* [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver
@ 2015-01-12 15:50 Michal Jastrzebski
2015-01-12 15:50 ` [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
` (4 more replies)
0 siblings, 5 replies; 41+ messages in thread
From: Michal Jastrzebski @ 2015-01-12 15:50 UTC (permalink / raw)
To: dev
From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Hi,
this patchset enables DCB in SRIOV (ETH_MQ_RX_VMDQ_DCB and ETH_MQ_TX_VMDQ_DCB)
for each VF and the PF for the ixgbe driver.
As a side effect this allows using multiple TX queues in a VF (8 if there are
16 or fewer VFs, or 4 if there are 32 or fewer VFs) when PFC is not enabled.
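For illustration only (not part of the patchset): a minimal sketch of how an
application might request this mode through the ethdev API used here. The
function name and the pool/queue counts are example choices, not values
mandated by the patches.
#include <string.h>
#include <rte_ethdev.h>
/* Request VMDq+DCB on both RX and TX with an equal number of pools,
 * as the checks added in patch 1/2 require. */
static int
configure_vmdq_dcb(uint8_t port_id)
{
	struct rte_eth_conf conf;
	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
	conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
	conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools = ETH_16_POOLS;
	conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools = ETH_16_POOLS;
	/* with 16 or fewer VFs each pool has up to 8 queues available */
	return rte_eth_dev_configure(port_id, 8, 8, &conf);
}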
Pawel Wodkowski (2):
pmd: add DCB for VF for ixgbe
testpmd: fix dcb in vt mode
app/test-pmd/cmdline.c | 4 +-
app/test-pmd/testpmd.c | 39 ++++++++++----
app/test-pmd/testpmd.h | 10 ----
lib/librte_ether/rte_ethdev.c | 84 +++++++++++++++++++++---------
lib/librte_ether/rte_ethdev.h | 5 +-
lib/librte_pmd_e1000/igb_pf.c | 3 +-
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 10 ++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 1 +
lib/librte_pmd_ixgbe/ixgbe_pf.c | 98 ++++++++++++++++++++++++++++++-----
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 ++-
10 files changed, 190 insertions(+), 71 deletions(-)
--
1.7.9.5
* [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
@ 2015-01-12 15:50 ` Michal Jastrzebski
2015-01-13 10:14 ` Vlad Zolotarov
2015-01-12 15:50 ` [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode Michal Jastrzebski
` (3 subsequent siblings)
4 siblings, 1 reply; 41+ messages in thread
From: Michal Jastrzebski @ 2015-01-12 15:50 UTC (permalink / raw)
To: dev
From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
This patch adds support for DCB in SRIOV mode. When no PFC
is enabled this feature can be used to provide multiple queues
(up to 8 or 4) per VF.
It incorporates the following modifications:
- Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
Rationale:
in SRIOV mode the PF uses the first free VF for RX/TX. If the VF count
is 16 or 32, all resources are assigned to VFs, so the PF can
be used only for configuration.
- Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
Rationale:
the RX and TX queue counts may differ if RX and TX are
configured in different modes. This allows informing the VF about
the proper number of queues.
- Extend the mailbox API for DCB mode (see the sketch below).
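For context only (not part of this patch): a rough sketch of the VF-side
counterpart that would consume the extended mailbox API, assuming API 1.1
was negotiated first via IXGBE_VF_API_NEGOTIATE. The helper name
vf_query_queues is hypothetical.
static int
vf_query_queues(struct ixgbe_hw *hw, uint32_t *nb_rx_q, uint32_t *nb_tx_q)
{
	uint32_t msg[5] = { IXGBE_VF_GET_QUEUES, 0, 0, 0, 0 };
	int err;
	/* ask the PF over the mailbox for this VF's queue counts */
	err = ixgbe_write_mbx(hw, msg, 5, 0);
	if (err == 0)
		err = ixgbe_read_mbx(hw, msg, 5, 0);
	if (err)
		return err;
	if ((msg[0] & IXGBE_VT_MSGTYPE_ACK) == 0)
		return -1;
	/* the indices match what ixgbe_get_vf_queues() fills in on the PF */
	*nb_tx_q = msg[IXGBE_VF_TX_QUEUES];
	*nb_rx_q = msg[IXGBE_VF_RX_QUEUES];
	return 0;
}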
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 84 +++++++++++++++++++++---------
lib/librte_ether/rte_ethdev.h | 5 +-
lib/librte_pmd_e1000/igb_pf.c | 3 +-
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 10 ++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 1 +
lib/librte_pmd_ixgbe/ixgbe_pf.c | 98 ++++++++++++++++++++++++++++++-----
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 ++-
7 files changed, 159 insertions(+), 49 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 95f2ceb..4c1a494 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
sizeof(dev->data->rx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
+ if (dev->data->rx_queues == NULL && nb_queues > 0) {
dev->data->nb_rx_queues = 0;
return -(ENOMEM);
}
@@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
sizeof(dev->data->tx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
+ if (dev->data->tx_queues == NULL && nb_queues > 0) {
dev->data->nb_tx_queues = 0;
return -(ENOMEM);
}
@@ -507,6 +507,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
{
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct rte_eth_dev_info dev_info;
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
@@ -524,11 +525,33 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return (-EINVAL);
}
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) &&
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)) {
+ enum rte_eth_nb_pools rx_pools =
+ dev_conf->rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+ enum rte_eth_nb_pools tx_pools =
+ dev_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+
+ if (rx_pools != tx_pools) {
+ /* Only an equal number of pools is supported when
+ * using DCB+VMDq in SRIOV */
+ PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
+ " SRIOV active, DCB+VMDQ mode, "
+ "number of rx and tx pools is not equal\n",
+ port_id);
+ return (-EINVAL);
+ }
+ }
+
+ uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
+ uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
+
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
case ETH_MQ_RX_VMDQ_DCB:
+ break;
+ case ETH_MQ_RX_VMDQ_RSS:
case ETH_MQ_RX_VMDQ_DCB_RSS:
- /* DCB/RSS VMDQ in SRIOV mode, not implement yet */
+ /* RSS, DCB+RSS VMDQ in SRIOV mode, not implemented yet */
PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
" SRIOV active, "
"unsupported VMDQ mq_mode rx %u\n",
@@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (nb_rx_q_per_pool > 1)
+ nb_rx_q_per_pool = 1;
break;
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- /* DCB VMDQ in SRIOV mode, not implement yet */
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "unsupported VMDQ mq_mode tx %u\n",
- port_id, dev_conf->txmode.mq_mode);
- return (-EINVAL);
+ case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
+ break;
default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (nb_tx_q_per_pool > 1)
+ nb_tx_q_per_pool = 1;
break;
}
/* check valid queue number */
- if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
- (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+ if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
- "queue number must less equal to %d\n",
- port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+ "rx/tx queue number must less equal to %d/%d\n",
+ port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
+ RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
return (-EINVAL);
}
} else {
- /* For vmdb+dcb mode check our configuration before we go further */
+ /* For vmdq+dcb mode check our configuration before we go further */
if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
@@ -606,11 +624,20 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
}
+ /* For DCB we need to obtain the maximum number of queues dynamically,
+ * as this depends on the max VFs exported by the PF */
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+ (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+ }
+
/* For DCB mode check our configuration before we go further */
if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
- if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
+ if (nb_rx_q != dev_info.max_rx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
"!= %d\n",
port_id, ETH_DCB_NUM_QUEUES);
@@ -630,7 +657,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
- if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
+ if (nb_tx_q != dev_info.max_tx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
"!= %d\n",
port_id, ETH_DCB_NUM_QUEUES);
@@ -690,7 +717,10 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
if (nb_rx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV there may be no free resources for the PF, so permit
+ * use only for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
if (nb_tx_q > dev_info.max_tx_queues) {
@@ -698,9 +728,13 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
port_id, nb_tx_q, dev_info.max_tx_queues);
return (-EINVAL);
}
+
if (nb_tx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV there may be no free resources for the PF, so permit
+ * use only for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
/* Copy the dev_conf parameter into the dev structure */
@@ -750,7 +784,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
ETHER_MAX_LEN;
}
- /* multipe queue mode checking */
+ /* multiple queue mode checking */
diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
if (diag != 0) {
PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index ce0528f..04fda83 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -299,7 +299,7 @@ enum rte_eth_rx_mq_mode {
enum rte_eth_tx_mq_mode {
ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
+ ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
@@ -1569,7 +1569,8 @@ struct rte_eth_dev {
struct rte_eth_dev_sriov {
uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
- uint8_t nb_q_per_pool; /**< rx queue number per pool */
+ uint8_t nb_rx_q_per_pool; /**< rx queue number per pool */
+ uint8_t nb_tx_q_per_pool; /**< tx queue number per pool */
uint16_t def_vmdq_idx; /**< Default pool num used for PF */
uint16_t def_pool_q_idx; /**< Default pool queue start reg index */
};
diff --git a/lib/librte_pmd_e1000/igb_pf.c b/lib/librte_pmd_e1000/igb_pf.c
index bc3816a..9d2f858 100644
--- a/lib/librte_pmd_e1000/igb_pf.c
+++ b/lib/librte_pmd_e1000/igb_pf.c
@@ -115,7 +115,8 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
rte_panic("Cannot allocate memory for private VF data\n");
RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
- RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 3fc3738..347f03c 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -3555,14 +3555,14 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_vf_info *vfinfo =
*(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
- uint8_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ uint8_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
uint32_t queue_stride =
IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
uint32_t queue_idx = vf * queue_stride, idx = 0, vf_idx;
- uint32_t queue_end = queue_idx + nb_q_per_pool - 1;
+ uint32_t tx_queue_end = queue_idx + nb_tx_q_per_pool - 1;
uint16_t total_rate = 0;
- if (queue_end >= hw->mac.max_tx_queues)
+ if (tx_queue_end >= hw->mac.max_tx_queues)
return -EINVAL;
if (vfinfo != NULL) {
@@ -3577,7 +3577,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
return -EINVAL;
/* Store tx_rate for this vf. */
- for (idx = 0; idx < nb_q_per_pool; idx++) {
+ for (idx = 0; idx < nb_tx_q_per_pool; idx++) {
if (((uint64_t)0x1 << idx) & q_msk) {
if (vfinfo[vf].tx_rate[idx] != tx_rate)
vfinfo[vf].tx_rate[idx] = tx_rate;
@@ -3595,7 +3595,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
}
/* Set RTTBCNRC of each queue/pool for vf X */
- for (; queue_idx <= queue_end; queue_idx++) {
+ for (; queue_idx <= tx_queue_end; queue_idx++) {
if (0x1 & q_msk)
ixgbe_set_queue_rate_limit(dev, queue_idx, tx_rate);
q_msk = q_msk >> 1;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
index ca99170..ebf16e9 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
@@ -159,6 +159,7 @@ struct ixgbe_vf_info {
uint16_t tx_rate[IXGBE_MAX_QUEUE_NUM_PER_VF];
uint16_t vlan_count;
uint8_t spoofchk_enabled;
+ unsigned int vf_api;
};
/*
diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index 51da1fd..4d30bcf 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -127,7 +127,8 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
}
- RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
@@ -189,7 +190,7 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
/*
- * SW msut set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
+ * SW must set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
*/
gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
gcr_ext &= ~IXGBE_GCR_EXT_VT_MODE_MASK;
@@ -214,19 +215,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
}
IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
- IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
+ IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
- /*
+ /*
* enable vlan filtering and allow all vlan tags through
*/
- vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
- vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
- IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
+ vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
+ vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
+ IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
- /* VFTA - enable all vlan filters */
- for (i = 0; i < IXGBE_MAX_VFTA; i++) {
- IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
- }
+ /* VFTA - enable all vlan filters */
+ for (i = 0; i < IXGBE_MAX_VFTA; i++) {
+ IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
+ }
/* Enable MAC Anti-Spoofing */
hw->mac.ops.set_mac_anti_spoofing(hw, FALSE, vf_num);
@@ -369,6 +370,73 @@ ixgbe_vf_reset(struct rte_eth_dev *dev, uint16_t vf, uint32_t *msgbuf)
}
static int
+ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+{
+ struct ixgbe_vf_info *vfinfo =
+ *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
+ int api = msgbuf[1];
+
+ switch (api) {
+ case ixgbe_mbox_api_10:
+ case ixgbe_mbox_api_11:
+ vfinfo[vf].vf_api = api;
+ return 0;
+ default:
+ break;
+ }
+
+ RTE_LOG(DEBUG, PMD, "VF %d requested invalid api version %u\n", vf, api);
+ return -1;
+}
+
+static int
+ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+{
+ struct ixgbe_vf_info *vfinfo =
+ *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
+ struct ixgbe_dcb_config *dcb_cfg =
+ IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
+
+ uint8_t num_tcs = dcb_cfg->num_tcs.pg_tcs;
+
+ /* verify the PF is supporting the correct APIs */
+ switch (vfinfo[vf].vf_api) {
+ case ixgbe_mbox_api_10:
+ case ixgbe_mbox_api_11:
+ break;
+ default:
+ return -1;
+ }
+
+ if (RTE_ETH_DEV_SRIOV(dev).active) {
+ if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
+ msgbuf[IXGBE_VF_TX_QUEUES] = num_tcs;
+ else
+ msgbuf[IXGBE_VF_TX_QUEUES] = 1;
+
+ if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
+ msgbuf[IXGBE_VF_RX_QUEUES] = num_tcs;
+ else
+ msgbuf[IXGBE_VF_RX_QUEUES] = 1;
+ } else {
+ /* only allow 1 Tx queue for bandwidth limiting */
+ msgbuf[IXGBE_VF_TX_QUEUES] = 1;
+ msgbuf[IXGBE_VF_RX_QUEUES] = 1;
+ }
+
+ /* notify VF of need for VLAN tag stripping, and correct queue */
+ if (num_tcs)
+ msgbuf[IXGBE_VF_TRANS_VLAN] = num_tcs;
+ else
+ msgbuf[IXGBE_VF_TRANS_VLAN] = 0;
+
+ /* notify VF of default queue */
+ msgbuf[IXGBE_VF_DEF_QUEUE] = 0;
+
+ return 0;
+}
+
+static int
ixgbe_vf_set_mac_addr(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -512,6 +580,12 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
case IXGBE_VF_SET_VLAN:
retval = ixgbe_vf_set_vlan(dev, vf, msgbuf);
break;
+ case IXGBE_VF_API_NEGOTIATE:
+ retval = ixgbe_negotiate_vf_api(dev, vf, msgbuf);
+ break;
+ case IXGBE_VF_GET_QUEUES:
+ retval = ixgbe_get_vf_queues(dev, vf, msgbuf);
+ break;
default:
PMD_DRV_LOG(DEBUG, "Unhandled Msg %8.8x", (unsigned)msgbuf[0]);
retval = IXGBE_ERR_MBX;
@@ -526,7 +600,7 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
msgbuf[0] |= IXGBE_VT_MSGTYPE_CTS;
- ixgbe_write_mbx(hw, msgbuf, 1, vf);
+ ixgbe_write_mbx(hw, msgbuf, mbx_size, vf);
return retval;
}
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e10d6a2..49b44fe 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
/* check support mq_mode for DCB */
if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
- return;
-
- if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
+ (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
return;
/** Configure DCB hardware **/
--
1.7.9.5
* [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
2015-01-12 15:50 ` [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
@ 2015-01-12 15:50 ` Michal Jastrzebski
2015-01-13 10:15 ` Vlad Zolotarov
2015-01-13 9:50 ` [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Wodkowski, PawelX
` (2 subsequent siblings)
4 siblings, 1 reply; 41+ messages in thread
From: Michal Jastrzebski @ 2015-01-12 15:50 UTC (permalink / raw)
To: dev
From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
This patch incorporates fixes to support DCB in SRIOV mode for testpmd.
It also cleans up some old code that is not needed or wrong.
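For reference, a usage sketch of the testpmd DCB command this patch touches
(the port id and traffic class count are example values; the port must be
stopped before reconfiguration):
testpmd> port stop all
testpmd> port config 0 dcb vt on 4 pfc off
testpmd> port start all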
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
app/test-pmd/cmdline.c | 4 ++--
app/test-pmd/testpmd.c | 39 +++++++++++++++++++++++++++++----------
app/test-pmd/testpmd.h | 10 ----------
3 files changed, 31 insertions(+), 22 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 882a5a2..3c60087 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1947,9 +1947,9 @@ cmd_config_dcb_parsed(void *parsed_result,
/* DCB in VT mode */
if (!strncmp(res->vt_en, "on",2))
- dcb_conf.dcb_mode = DCB_VT_ENABLED;
+ dcb_conf.vt_en = 1;
else
- dcb_conf.dcb_mode = DCB_ENABLED;
+ dcb_conf.vt_en = 0;
if (!strncmp(res->pfc_en, "on",2)) {
dcb_conf.pfc_en = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 8c69756..6677a5e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1733,7 +1733,8 @@ const uint16_t vlan_tags[] = {
};
static int
-get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
+get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf,
+ uint16_t sriov)
{
uint8_t i;
@@ -1741,7 +1742,7 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
* Builds up the correct configuration for dcb+vt based on the vlan tags array
* given above, and the number of traffic classes available for use.
*/
- if (dcb_conf->dcb_mode == DCB_VT_ENABLED) {
+ if (dcb_conf->vt_en == 1) {
struct rte_eth_vmdq_dcb_conf vmdq_rx_conf;
struct rte_eth_vmdq_dcb_tx_conf vmdq_tx_conf;
@@ -1758,9 +1759,17 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
vmdq_rx_conf.pool_map[i].vlan_id = vlan_tags[ i ];
vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
- vmdq_rx_conf.dcb_queue[i] = i;
- vmdq_tx_conf.dcb_queue[i] = i;
+
+ if (sriov == 0) {
+ for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ vmdq_rx_conf.dcb_queue[i] = i;
+ vmdq_tx_conf.dcb_queue[i] = i;
+ }
+ } else {
+ for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ vmdq_rx_conf.dcb_queue[i] = i % dcb_conf->num_tcs;
+ vmdq_tx_conf.dcb_queue[i] = i % dcb_conf->num_tcs;
+ }
}
/*set DCB mode of RX and TX of multiple queues*/
@@ -1818,22 +1827,32 @@ init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
uint16_t nb_vlan;
uint16_t i;
- /* rxq and txq configuration in dcb mode */
- nb_rxq = 128;
- nb_txq = 128;
rx_free_thresh = 64;
+ rte_port = &ports[pid];
memset(&port_conf,0,sizeof(struct rte_eth_conf));
/* Enter DCB configuration status */
dcb_config = 1;
nb_vlan = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
/*set configuration of DCB in vt mode and DCB in non-vt mode*/
- retval = get_eth_dcb_conf(&port_conf, dcb_conf);
+ retval = get_eth_dcb_conf(&port_conf, dcb_conf, rte_port->dev_info.max_vfs);
+
+ /* rxq and txq configuration in dcb mode */
+ nb_rxq = rte_port->dev_info.max_rx_queues;
+ nb_txq = rte_port->dev_info.max_tx_queues;
+
+ if (rte_port->dev_info.max_vfs) {
+ if (port_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
+ nb_rxq /= port_conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+
+ if (port_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
+ nb_txq /= port_conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+ }
+
if (retval < 0)
return retval;
- rte_port = &ports[pid];
memcpy(&rte_port->dev_conf, &port_conf,sizeof(struct rte_eth_conf));
rte_port->rx_conf.rx_thresh = rx_thresh;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f8b0740..8976acc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -227,20 +227,10 @@ struct fwd_config {
portid_t nb_fwd_ports; /**< Nb. of ports involved. */
};
-/**
- * DCB mode enable
- */
-enum dcb_mode_enable
-{
- DCB_VT_ENABLED,
- DCB_ENABLED
-};
-
/*
* DCB general config info
*/
struct dcb_config {
- enum dcb_mode_enable dcb_mode;
uint8_t vt_en;
enum rte_eth_nb_tcs num_tcs;
uint8_t pfc_en;
--
1.7.9.5
* Re: [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
2015-01-12 15:50 ` [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
2015-01-12 15:50 ` [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode Michal Jastrzebski
@ 2015-01-13 9:50 ` Wodkowski, PawelX
2015-01-13 10:11 ` Vlad Zolotarov
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
4 siblings, 0 replies; 41+ messages in thread
From: Wodkowski, PawelX @ 2015-01-13 9:50 UTC (permalink / raw)
To: Jastrzebski, MichalX K, dev
Comments are more than welcome :)
Pawel
* Re: [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
` (2 preceding siblings ...)
2015-01-13 9:50 ` [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Wodkowski, PawelX
@ 2015-01-13 10:11 ` Vlad Zolotarov
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
4 siblings, 0 replies; 41+ messages in thread
From: Vlad Zolotarov @ 2015-01-13 10:11 UTC (permalink / raw)
To: Michal Jastrzebski, dev
On 01/12/15 17:50, Michal Jastrzebski wrote:
> From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
> Hi,
> this patchset enables DCB in SRIOV (ETH_MQ_RX_VMDQ_DCB and ETH_MQ_TX_VMDQ_DCB)
> for each VF and the PF for the ixgbe driver.
>
> As a side effect this allows using multiple TX queues in a VF (8 if there are
> 16 or fewer VFs, or 4 if there are 32 or fewer VFs) when PFC is not enabled.
Here it is! ;) Thanks. Pls., ignore my previous email about the
respinning... ;)
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-12 15:50 ` [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
@ 2015-01-13 10:14 ` Vlad Zolotarov
2015-01-13 11:00 ` Wodkowski, PawelX
2015-01-14 1:00 ` Ouyang, Changchun
0 siblings, 2 replies; 41+ messages in thread
From: Vlad Zolotarov @ 2015-01-13 10:14 UTC (permalink / raw)
To: Michal Jastrzebski, dev
On 01/12/15 17:50, Michal Jastrzebski wrote:
> From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
> This patch adds support for DCB in SRIOV mode. When no PFC
> is enabled this feature can be used to provide multiple queues
> (up to 8 or 4) per VF.
>
> It incorporates the following modifications:
> - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> Rationale:
> in SRIOV mode the PF uses the first free VF for RX/TX. If the VF count
> is 16 or 32, all resources are assigned to VFs, so the PF can
> be used only for configuration.
> - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> Rationale:
> the RX and TX queue counts may differ if RX and TX are
> configured in different modes. This allows informing the VF about
> the proper number of queues.
> - Extend the mailbox API for DCB mode.
IMHO each bullet above is worth a separate patch. ;)
It would be much easier to review.
thanks,
vlad
* Re: [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode
2015-01-12 15:50 ` [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode Michal Jastrzebski
@ 2015-01-13 10:15 ` Vlad Zolotarov
2015-01-13 11:08 ` Wodkowski, PawelX
0 siblings, 1 reply; 41+ messages in thread
From: Vlad Zolotarov @ 2015-01-13 10:15 UTC (permalink / raw)
To: Michal Jastrzebski, dev
On 01/12/15 17:50, Michal Jastrzebski wrote:
> From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
> This patch incorporates fixes to support DCB in SRIOV mode for testpmd.
> It also cleans up some old code that is not needed or wrong.
The same here: could u, pls., separate the "cleanup" part of the patch
from the "fixes" part into separate patches?
thanks,
vlad
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-13 10:14 ` Vlad Zolotarov
@ 2015-01-13 11:00 ` Wodkowski, PawelX
2015-01-14 1:00 ` Ouyang, Changchun
1 sibling, 0 replies; 41+ messages in thread
From: Wodkowski, PawelX @ 2015-01-13 11:00 UTC (permalink / raw)
To: Vlad Zolotarov, Jastrzebski, MichalX K, dev
> IMHO each bullet above is worth a separate patch. ;)
> It would be much easier to review.
>
Good point. I will send next version shortly.
Pawel
* Re: [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode
2015-01-13 10:15 ` Vlad Zolotarov
@ 2015-01-13 11:08 ` Wodkowski, PawelX
0 siblings, 0 replies; 41+ messages in thread
From: Wodkowski, PawelX @ 2015-01-13 11:08 UTC (permalink / raw)
To: Vlad Zolotarov, Jastrzebski, MichalX K, dev
> The same here: could u, pls., separate the "cleanup" part of the patch
> from the "fixes" part into separate patches?
>
Maybe I introduced a little confusion by saying "cleanups". Some code became
obsolete (like enum dcb_mode_enable) when I fixed DCB in VT mode, so I called
removing those parts "cleanups". Please consider them to be fixes.
Pawel
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-13 10:14 ` Vlad Zolotarov
2015-01-13 11:00 ` Wodkowski, PawelX
@ 2015-01-14 1:00 ` Ouyang, Changchun
1 sibling, 0 replies; 41+ messages in thread
From: Ouyang, Changchun @ 2015-01-14 1:00 UTC (permalink / raw)
To: Vlad Zolotarov, Jastrzebski, MichalX K, dev
> IMHO each bullet above is worth a separate patch. ;) It would be much easier
> to review.
>
> thanks,
> vlad
>
Agree with Vlad
* [dpdk-dev] [PATCH v2 0/4] Enable DCB in SRIOV mode for ixgbe driver
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
` (3 preceding siblings ...)
2015-01-13 10:11 ` Vlad Zolotarov
@ 2015-01-19 13:02 ` Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 1/4] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
` (4 more replies)
4 siblings, 5 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-01-19 13:02 UTC (permalink / raw)
To: dev
v2:
- Split the patchset for easier review.
- Remove the "pmd: add api version negotiation for ixgbe driver" and "pmd: extend
mailbox api to report number of RX/TX queues" patches, as those were already
merged from another patch.
v1:
This patchset enables DCB in SRIOV (ETH_MQ_RX_VMDQ_DCB and ETH_MQ_TX_VMDQ_DCB)
for each VF and the PF for the ixgbe driver.
As a side effect this allows using multiple TX queues in a VF (8 if there are
16 or fewer VFs, or 4 if there are 32 or fewer VFs) when PFC is not enabled.
Pawel Wodkowski (4):
ethdev: Allow zero rx/tx queues in SRIOV mode
ethdev: prevent changing of nb_q_per_pool in
rte_eth_dev_check_mq_mode()
pmd: add support for DCB in SRIOV mode for ixgbe driver.
testpmd: fix dcb in vt mode
app/test-pmd/cmdline.c | 4 +--
app/test-pmd/testpmd.c | 39 +++++++++++++++++------
app/test-pmd/testpmd.h | 10 ------
lib/librte_ether/rte_ethdev.c | 63 +++++++++++++++++++++++--------------
lib/librte_ether/rte_ethdev.h | 2 +-
lib/librte_pmd_ixgbe/ixgbe_pf.c | 42 ++++++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 ++---
7 files changed, 106 insertions(+), 61 deletions(-)
--
1.7.9.5
* [dpdk-dev] [PATCH v2 1/4] ethdev: Allow zero rx/tx queues in SRIOV mode
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
@ 2015-01-19 13:02 ` Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode() Pawel Wodkowski
` (3 subsequent siblings)
4 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-01-19 13:02 UTC (permalink / raw)
To: dev
Allow zero rx/tx queues to be passed to rte_eth_dev_configure(). This
way the PF can be used only for configuration purposes when no receive
and/or transmit functionality is needed.
Rationale:
in SRIOV mode the PF uses the first free VF for RX/TX (at least on ixgbe
based NICs). For example: on an 82599EB based NIC with a VF count of 16,
32 or 64, all resources are assigned to VFs, so the PF can be used only
for configuration purposes.
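A minimal sketch of the call this change permits (illustrative only; the
helper name is not part of the patch):
#include <string.h>
#include <rte_ethdev.h>
/* All queue resources belong to the VFs; configure the PF with zero
 * RX/TX queues purely so the hardware still gets programmed. */
static int
configure_pf_only(uint8_t port_id)
{
	struct rte_eth_conf conf;
	memset(&conf, 0, sizeof(conf));
	/* accepted after this patch when SRIOV is active,
	 * still rejected with -EINVAL otherwise */
	return rte_eth_dev_configure(port_id, 0, 0, &conf);
}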
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 077d430..62d7f6e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
sizeof(dev->data->rx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
+ if (dev->data->rx_queues == NULL && nb_queues > 0) {
dev->data->nb_rx_queues = 0;
return -(ENOMEM);
}
@@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
sizeof(dev->data->tx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
+ if (dev->data->tx_queues == NULL && nb_queues > 0) {
dev->data->nb_tx_queues = 0;
return -(ENOMEM);
}
@@ -731,7 +731,10 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
if (nb_rx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV there may be no free resources for the PF, so permit
+ * use only for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
if (nb_tx_q > dev_info.max_tx_queues) {
@@ -739,9 +742,13 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
port_id, nb_tx_q, dev_info.max_tx_queues);
return (-EINVAL);
}
+
if (nb_tx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV there can be no free resource for PF. So permit use only
+ * for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
/* Copy the dev_conf parameter into the dev structure */
--
1.7.9.5
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode()
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 1/4] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
@ 2015-01-19 13:02 ` Pawel Wodkowski
2015-01-20 1:32 ` Ouyang, Changchun
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
` (2 subsequent siblings)
4 siblings, 1 reply; 41+ messages in thread
From: Pawel Wodkowski @ 2015-01-19 13:02 UTC (permalink / raw)
To: dev
If SRIOV is used and device configuration does not use MQ, the
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool is set to 1 in
rte_eth_dev_check_mq_mode().
This is bad because of two reasons:
1. Port reconfiguration from non-MQ mode to MQ mode is impossible.
2. Configuring RX and TX side in different ways is impossible.
This patch fixes the first issue by not changing
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool,
and the second by comparing nb_q_per_pool separately for RX
(nb_rx_q_per_pool) and for TX (nb_tx_q_per_pool).
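A sketch of the reconfiguration sequence that issue 1 breaks (illustrative;
SRIOV assumed active on port 0, queue counts are examples; the same failure
is demonstrated with testpmd later in this thread):
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = ETH_MQ_RX_NONE;     /* falls back to VMDQ_ONLY and
	                                           * clamps nb_q_per_pool to 1 */
	rte_eth_dev_configure(0, 1, 1, &conf);

	conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS; /* now needs nb_q_per_pool >= 4, */
	rte_eth_dev_configure(0, 4, 4, &conf);    /* but it stayed 1 -> -EINVAL */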
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 62d7f6e..85385f8 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -548,6 +548,9 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return (-EINVAL);
}
+ uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+
switch (dev_conf->rxmode.mq_mode) {
case ETH_MQ_RX_VMDQ_DCB:
case ETH_MQ_RX_VMDQ_DCB_RSS:
@@ -580,8 +583,8 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (nb_rx_q_per_pool > 1)
+ nb_rx_q_per_pool = 1;
break;
}
@@ -596,15 +599,16 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ if (nb_tx_q_per_pool > 1)
+ nb_tx_q_per_pool = 1;
break;
}
/* check valid queue number */
- if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
- (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+ if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
- "queue number must less equal to %d\n",
- port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+ "rx/tx queue number must less or equal to %d/%d\n",
+ port_id, nb_rx_q_per_pool, nb_tx_q_per_pool);
return (-EINVAL);
}
} else {
--
1.7.9.5
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver.
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 1/4] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode() Pawel Wodkowski
@ 2015-01-19 13:02 ` Pawel Wodkowski
2015-01-20 1:56 ` Ouyang, Changchun
2015-01-20 6:52 ` Thomas Monjalon
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 4/4] testpmd: fix dcb in vt mode Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
4 siblings, 2 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-01-19 13:02 UTC (permalink / raw)
To: dev
Add support for DCB in SRIOV mode. When no PFC is enabled this feature
might be used as multiple queues for a VF (up to 8 queues if the VF count
is less than or equal to 16, or 4 if it is less than or equal to 32).
The PF must initialize RX in ETH_MQ_RX_VMDQ_DCB and TX in
ETH_MQ_TX_VMDQ_DCB.
A VF should initialize Rx in ETH_MQ_RX_DCB and Tx in ETH_MQ_TX_DCB to use
multiple queues and/or DCB.
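A sketch of the PF-side configuration this enables (illustrative values;
field names taken from the diff below; with 16 pools each VF gets up to 8
queues, with 32 pools up to 4):
	struct rte_eth_conf pf_conf;

	memset(&pf_conf, 0, sizeof(pf_conf));
	pf_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
	pf_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
	pf_conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools = ETH_16_POOLS;
	pf_conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools = ETH_16_POOLS;
	/* A VF would instead set ETH_MQ_RX_DCB / ETH_MQ_TX_DCB in its own
	 * rte_eth_conf to use the queues reported via the mailbox. */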
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 32 ++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 2 +-
lib/librte_pmd_ixgbe/ixgbe_pf.c | 42 +++++++++++++++++++++++++++----------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 +++----
4 files changed, 54 insertions(+), 29 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 85385f8..115465e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -532,6 +532,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
{
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct rte_eth_dev_info dev_info;
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
@@ -553,8 +554,9 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
switch (dev_conf->rxmode.mq_mode) {
case ETH_MQ_RX_VMDQ_DCB:
+ break;
case ETH_MQ_RX_VMDQ_DCB_RSS:
- /* DCB/RSS VMDQ in SRIOV mode, not implement yet */
+ /* DCB+RSS VMDQ in SRIOV mode, not implement yet */
PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
" SRIOV active, "
"unsupported VMDQ mq_mode rx %u\n",
@@ -589,13 +591,8 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- /* DCB VMDQ in SRIOV mode, not implement yet */
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "unsupported VMDQ mq_mode tx %u\n",
- port_id, dev_conf->txmode.mq_mode);
- return (-EINVAL);
+ case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
+ break;
default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
@@ -612,7 +609,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return (-EINVAL);
}
} else {
- /* For vmdb+dcb mode check our configuration before we go further */
+ /* For vmdq+dcb mode check our configuration before we go further */
if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
@@ -651,11 +648,20 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
}
- /* For DCB mode check our configuration before we go further */
+ /* For DCB we need to obtain maximum number of queues dynamically,
+ * as this depends on max VF exported in PF */
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+ (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+ }
+
+ /* For DCB mode check our configuration before we go further */
if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
- if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
+ if (nb_rx_q != dev_info.max_rx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
"!= %d\n",
port_id, ETH_DCB_NUM_QUEUES);
@@ -675,7 +681,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
- if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
+ if (nb_tx_q != dev_info.max_tx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
"!= %d\n",
port_id, ETH_DCB_NUM_QUEUES);
@@ -802,7 +808,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
ETHER_MAX_LEN;
}
- /* multipe queue mode checking */
+ /* multiple queue mode checking */
diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
if (diag != 0) {
PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index ce0528f..6df3f29 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -299,7 +299,7 @@ enum rte_eth_rx_mq_mode {
enum rte_eth_tx_mq_mode {
ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
+ ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index 93f6e43..b5f570d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -231,19 +231,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
}
IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
- IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
+ IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
- /*
+ /*
* enable vlan filtering and allow all vlan tags through
*/
- vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
- vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
- IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
+ vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
+ vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
+ IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
- /* VFTA - enable all vlan filters */
- for (i = 0; i < IXGBE_MAX_VFTA; i++) {
- IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
- }
+ /* VFTA - enable all vlan filters */
+ for (i = 0; i < IXGBE_MAX_VFTA; i++) {
+ IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
+ }
/* Enable MAC Anti-Spoofing */
hw->mac.ops.set_mac_anti_spoofing(hw, FALSE, vf_num);
@@ -513,6 +513,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
uint32_t default_q = vf * RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ uint8_t pools;
/* Verify if the PF supports the mbox APIs version or not */
switch (vfinfo[vf].api_version) {
@@ -524,8 +525,27 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
}
/* Notify VF of Rx and Tx queue number */
- msgbuf[IXGBE_VF_RX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
- msgbuf[IXGBE_VF_TX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ pools = dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+ if (pools <= 16)
+ msgbuf[IXGBE_VF_RX_QUEUES] = 8;
+ else if (pools <= 32)
+ msgbuf[IXGBE_VF_RX_QUEUES] = 4;
+ else
+ msgbuf[IXGBE_VF_RX_QUEUES] = 1;
+ } else
+ msgbuf[IXGBE_VF_RX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+
+ if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ pools = dev->data->dev_conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+ if (pools <= 16)
+ msgbuf[IXGBE_VF_TX_QUEUES] = 8;
+ else if (pools <= 32)
+ msgbuf[IXGBE_VF_TX_QUEUES] = 4;
+ else
+ msgbuf[IXGBE_VF_TX_QUEUES] = 1;
+ } else
+ msgbuf[IXGBE_VF_TX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
/* Notify VF of default queue */
msgbuf[IXGBE_VF_DEF_QUEUE] = default_q;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 840bc07..eaed280 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
/* check support mq_mode for DCB */
if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
- return;
-
- if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
+ (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
return;
/** Configure DCB hardware **/
--
1.7.9.5
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v2 4/4] testpmd: fix dcb in vt mode
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
` (2 preceding siblings ...)
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
@ 2015-01-19 13:02 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
4 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-01-19 13:02 UTC (permalink / raw)
To: dev
This patch incorporates fixes to support DCB in SRIOV mode in testpmd.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
app/test-pmd/cmdline.c | 4 ++--
app/test-pmd/testpmd.c | 39 +++++++++++++++++++++++++++++----------
app/test-pmd/testpmd.h | 10 ----------
3 files changed, 31 insertions(+), 22 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4618b92..d6a18a9 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1947,9 +1947,9 @@ cmd_config_dcb_parsed(void *parsed_result,
/* DCB in VT mode */
if (!strncmp(res->vt_en, "on",2))
- dcb_conf.dcb_mode = DCB_VT_ENABLED;
+ dcb_conf.vt_en = 1;
else
- dcb_conf.dcb_mode = DCB_ENABLED;
+ dcb_conf.vt_en = 0;
if (!strncmp(res->pfc_en, "on",2)) {
dcb_conf.pfc_en = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 773b8af..9b12c25 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1743,7 +1743,8 @@ const uint16_t vlan_tags[] = {
};
static int
-get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
+get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf,
+ uint16_t sriov)
{
uint8_t i;
@@ -1751,7 +1752,7 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
* Builds up the correct configuration for dcb+vt based on the vlan tags array
* given above, and the number of traffic classes available for use.
*/
- if (dcb_conf->dcb_mode == DCB_VT_ENABLED) {
+ if (dcb_conf->vt_en == 1) {
struct rte_eth_vmdq_dcb_conf vmdq_rx_conf;
struct rte_eth_vmdq_dcb_tx_conf vmdq_tx_conf;
@@ -1768,9 +1769,17 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
vmdq_rx_conf.pool_map[i].vlan_id = vlan_tags[ i ];
vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
- vmdq_rx_conf.dcb_queue[i] = i;
- vmdq_tx_conf.dcb_queue[i] = i;
+
+ if (sriov == 0) {
+ for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ vmdq_rx_conf.dcb_queue[i] = i;
+ vmdq_tx_conf.dcb_queue[i] = i;
+ }
+ } else {
+ for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ vmdq_rx_conf.dcb_queue[i] = i % dcb_conf->num_tcs;
+ vmdq_tx_conf.dcb_queue[i] = i % dcb_conf->num_tcs;
+ }
}
/*set DCB mode of RX and TX of multiple queues*/
@@ -1828,22 +1837,32 @@ init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
uint16_t nb_vlan;
uint16_t i;
- /* rxq and txq configuration in dcb mode */
- nb_rxq = 128;
- nb_txq = 128;
rx_free_thresh = 64;
+ rte_port = &ports[pid];
memset(&port_conf,0,sizeof(struct rte_eth_conf));
/* Enter DCB configuration status */
dcb_config = 1;
nb_vlan = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
/*set configuration of DCB in vt mode and DCB in non-vt mode*/
- retval = get_eth_dcb_conf(&port_conf, dcb_conf);
+ retval = get_eth_dcb_conf(&port_conf, dcb_conf, rte_port->dev_info.max_vfs);
+
+ /* rxq and txq configuration in dcb mode */
+ nb_rxq = rte_port->dev_info.max_rx_queues;
+ nb_txq = rte_port->dev_info.max_tx_queues;
+
+ if (rte_port->dev_info.max_vfs) {
+ if (port_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
+ nb_rxq /= port_conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+
+ if (port_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
+ nb_txq /= port_conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+ }
+
if (retval < 0)
return retval;
- rte_port = &ports[pid];
memcpy(&rte_port->dev_conf, &port_conf,sizeof(struct rte_eth_conf));
rte_port->rx_conf.rx_thresh = rx_thresh;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 8f5e6c7..695e893 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -227,20 +227,10 @@ struct fwd_config {
portid_t nb_fwd_ports; /**< Nb. of ports involved. */
};
-/**
- * DCB mode enable
- */
-enum dcb_mode_enable
-{
- DCB_VT_ENABLED,
- DCB_ENABLED
-};
-
/*
* DCB general config info
*/
struct dcb_config {
- enum dcb_mode_enable dcb_mode;
uint8_t vt_en;
enum rte_eth_nb_tcs num_tcs;
uint8_t pfc_en;
--
1.7.9.5
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode()
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode() Pawel Wodkowski
@ 2015-01-20 1:32 ` Ouyang, Changchun
2015-01-20 9:09 ` Wodkowski, PawelX
0 siblings, 1 reply; 41+ messages in thread
From: Ouyang, Changchun @ 2015-01-20 1:32 UTC (permalink / raw)
To: Wodkowski, PawelX, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Monday, January 19, 2015 9:02 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of
> nb_q_per_pool in rte_eth_dev_check_mq_mode()
>
> If SRIOV is used and device configuration does not use MQ, the
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool is set to 1 in
> rte_eth_dev_check_mq_mode().
> This is bad because of two reasons:
> 1. Port reconfiguration from non-MQ mode to MQ mode is impossible. 2.
> Configuring RX and TX side in different ways is impossible.
>
This case is possible:
rxmode.mq_mode is ETH_MQ_RX_VMDQ_RSS, and txmode.mq_mode is ETH_MQ_TX_NONE.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver.
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
@ 2015-01-20 1:56 ` Ouyang, Changchun
2015-01-20 6:52 ` Thomas Monjalon
1 sibling, 0 replies; 41+ messages in thread
From: Ouyang, Changchun @ 2015-01-20 1:56 UTC (permalink / raw)
To: Wodkowski, PawelX, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Monday, January 19, 2015 9:03 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV
> mode for ixgbe driver.
>
> Add support for DCB in SRIOV mode. When no PFC is enabled this feature
> might be used as multiple queues for a VF (up to 8 queues if the VF count is
> less than or equal to 16, or 4 if it is less than or equal to 32).
>
> The PF must initialize RX in ETH_MQ_RX_VMDQ_DCB and TX in
> ETH_MQ_TX_VMDQ_DCB.
> A VF should initialize Rx in ETH_MQ_RX_DCB and Tx in ETH_MQ_TX_DCB to use
> multiple queues and/or DCB.
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 32 ++++++++++++++++------------
> lib/librte_ether/rte_ethdev.h | 2 +-
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 42 +++++++++++++++++++++++++++--
> --------
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 +++----
> 4 files changed, 54 insertions(+), 29 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 85385f8..115465e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -532,6 +532,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf) {
> struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + struct rte_eth_dev_info dev_info;
>
> if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
> /* check multi-queue mode */
> @@ -553,8 +554,9 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> switch (dev_conf->rxmode.mq_mode) {
> case ETH_MQ_RX_VMDQ_DCB:
> + break;
> case ETH_MQ_RX_VMDQ_DCB_RSS:
> - /* DCB/RSS VMDQ in SRIOV mode, not implement
> yet */
> + /* DCB+RSS VMDQ in SRIOV mode, not implement
> yet */
> PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> " SRIOV active, "
> "unsupported VMDQ mq_mode
> rx %u\n", @@ -589,13 +591,8 @@ rte_eth_dev_check_mq_mode(uint8_t
> port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> switch (dev_conf->txmode.mq_mode) {
> - case ETH_MQ_TX_VMDQ_DCB:
> - /* DCB VMDQ in SRIOV mode, not implement yet */
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> - " SRIOV active, "
> - "unsupported VMDQ mq_mode
> tx %u\n",
> - port_id, dev_conf-
> >txmode.mq_mode);
> - return (-EINVAL);
> + case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV
> mode*/
> + break;
> default: /* ETH_MQ_TX_VMDQ_ONLY or
> ETH_MQ_TX_NONE */
> /* if nothing mq mode configure, use default scheme
> */
> dev->data->dev_conf.txmode.mq_mode =
> ETH_MQ_TX_VMDQ_ONLY; @@ -612,7 +609,7 @@
> rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> return (-EINVAL);
> }
> } else {
> - /* For vmdb+dcb mode check our configuration before we
> go further */
> + /* For vmdq+dcb mode check our configuration before we
> go further */
> if (dev_conf->rxmode.mq_mode ==
> ETH_MQ_RX_VMDQ_DCB) {
> const struct rte_eth_vmdq_dcb_conf *conf;
>
> @@ -651,11 +648,20 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
> }
>
> - /* For DCB mode check our configuration before we go
> further */
> + /* For DCB we need to obtain maximum number of queues
> dynamically,
> + * as this depends on max VF exported in PF */
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> + (dev_conf->txmode.mq_mode ==
> ETH_MQ_TX_DCB)) {
> +
> + FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >dev_infos_get, -ENOTSUP);
> + (*dev->dev_ops->dev_infos_get)(dev,
> &dev_info);
> + }
> +
> + /* For DCB mode check our configuration before we go
> further */
> if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
> const struct rte_eth_dcb_rx_conf *conf;
>
> - if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
> + if (nb_rx_q != dev_info.max_rx_queues) {
> PMD_DEBUG_TRACE("ethdev port_id=%d
> DCB, nb_rx_q "
> "!= %d\n",
> port_id,
> ETH_DCB_NUM_QUEUES);
> @@ -675,7 +681,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
> const struct rte_eth_dcb_tx_conf *conf;
>
> - if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
> + if (nb_tx_q != dev_info.max_tx_queues) {
> PMD_DEBUG_TRACE("ethdev port_id=%d
> DCB, nb_tx_q "
> "!= %d\n",
> port_id,
> ETH_DCB_NUM_QUEUES);
> @@ -802,7 +808,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t
> nb_rx_q, uint16_t nb_tx_q,
> ETHER_MAX_LEN;
> }
>
> - /* multipe queue mode checking */
> + /* multiple queue mode checking */
> diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q,
> dev_conf);
> if (diag != 0) {
> PMD_DEBUG_TRACE("port%d
> rte_eth_dev_check_mq_mode = %d\n", diff --git
> a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index
> ce0528f..6df3f29 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -299,7 +299,7 @@ enum rte_eth_rx_mq_mode { enum
> rte_eth_tx_mq_mode {
> ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
> ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
> - ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is
> on. */
> + ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on.
> */
> ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
> };
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> b/lib/librte_pmd_ixgbe/ixgbe_pf.c index 93f6e43..b5f570d 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> @@ -231,19 +231,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev
> *eth_dev)
> }
>
> IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
> - IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
> + IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
>
> - /*
> + /*
> * enable vlan filtering and allow all vlan tags through
> */
> - vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
> - vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
> - IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
> + vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
> + vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
> + IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
It is better to use a separate cleanup patch for this indentation fix.
> - /* VFTA - enable all vlan filters */
> - for (i = 0; i < IXGBE_MAX_VFTA; i++) {
> - IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
> - }
> + /* VFTA - enable all vlan filters */
> + for (i = 0; i < IXGBE_MAX_VFTA; i++) {
> + IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
> + }
>
> /* Enable MAC Anti-Spoofing */
> hw->mac.ops.set_mac_anti_spoofing(hw, FALSE, vf_num); @@ -
> 513,6 +513,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf,
> uint32_t *msgbuf)
> struct ixgbe_vf_info *vfinfo =
> *IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data-
> >dev_private);
> uint32_t default_q = vf * RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> + uint8_t pools;
>
> /* Verify if the PF supports the mbox APIs version or not */
> switch (vfinfo[vf].api_version) {
> @@ -524,8 +525,27 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev,
> uint32_t vf, uint32_t *msgbuf)
> }
>
> /* Notify VF of Rx and Tx queue number */
> - msgbuf[IXGBE_VF_RX_QUEUES] =
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> - msgbuf[IXGBE_VF_TX_QUEUES] =
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> + if (dev->data->dev_conf.rxmode.mq_mode ==
> ETH_MQ_RX_VMDQ_DCB) {
> + pools = dev->data-
> >dev_conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
> + if (pools <= 16)
> + msgbuf[IXGBE_VF_RX_QUEUES] = 8;
> + else if (pools <= 32)
> + msgbuf[IXGBE_VF_RX_QUEUES] = 4;
> + else
> + msgbuf[IXGBE_VF_RX_QUEUES] = 1;
> + } else
> + msgbuf[IXGBE_VF_RX_QUEUES] =
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> +
> + if (dev->data->dev_conf.txmode.mq_mode ==
> ETH_MQ_TX_VMDQ_DCB) {
> + pools = dev->data-
> >dev_conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
> + if (pools <= 16)
> + msgbuf[IXGBE_VF_TX_QUEUES] = 8;
> + else if (pools <= 32)
> + msgbuf[IXGBE_VF_TX_QUEUES] = 4;
> + else
> + msgbuf[IXGBE_VF_TX_QUEUES] = 1;
Is there any logic to make sure msgbuf[IXGBE_VF_TX_QUEUES] and RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool
have consistent values? Do we need a check here?
> + } else
> + msgbuf[IXGBE_VF_TX_QUEUES] =
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
>
> /* Notify VF of default queue */
> msgbuf[IXGBE_VF_DEF_QUEUE] = default_q; diff --git
> a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index 840bc07..eaed280 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> @@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev
> *dev)
>
> /* check support mq_mode for DCB */
> if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
> - (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
> - return;
> -
> - if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
> + (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
> + (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
> + (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
> return;
>
> /** Configure DCB hardware **/
> --
> 1.7.9.5
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver.
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
2015-01-20 1:56 ` Ouyang, Changchun
@ 2015-01-20 6:52 ` Thomas Monjalon
1 sibling, 0 replies; 41+ messages in thread
From: Thomas Monjalon @ 2015-01-20 6:52 UTC (permalink / raw)
To: Pawel Wodkowski; +Cc: dev
2015-01-19 14:02, Pawel Wodkowski:
> Add support for DCB in SRIOV mode. When no PFC is enabled this feature
> might be used as multiple queues for a VF (up to 8 queues if the VF count is
> less than or equal to 16, or 4 if it is less than or equal to 32).
>
> The PF must initialize RX in ETH_MQ_RX_VMDQ_DCB and TX in
> ETH_MQ_TX_VMDQ_DCB.
> A VF should initialize Rx in ETH_MQ_RX_DCB and Tx in ETH_MQ_TX_DCB to use
> multiple queues and/or DCB.
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
[...]
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> @@ -231,19 +231,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
> }
>
> IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
> - IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
> + IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
>
> - /*
> + /*
> * enable vlan filtering and allow all vlan tags through
> */
> - vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
> - vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
> - IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
> + vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
> + vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
> + IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
>
> - /* VFTA - enable all vlan filters */
> - for (i = 0; i < IXGBE_MAX_VFTA; i++) {
> - IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
> - }
> + /* VFTA - enable all vlan filters */
> + for (i = 0; i < IXGBE_MAX_VFTA; i++) {
> + IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
> + }
Please do not mix indentation formatting with "real changes".
When looking at the history of these lines, it would be difficult to understand
that this patch doesn't make a real change. Having a dedicated cleanup commit is better.
Thanks
--
Thomas
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode()
2015-01-20 1:32 ` Ouyang, Changchun
@ 2015-01-20 9:09 ` Wodkowski, PawelX
0 siblings, 0 replies; 41+ messages in thread
From: Wodkowski, PawelX @ 2015-01-20 9:09 UTC (permalink / raw)
To: Ouyang, Changchun, dev
> -----Original Message-----
> From: Ouyang, Changchun
> Sent: Tuesday, January 20, 2015 2:33 AM
> To: Wodkowski, PawelX; dev@dpdk.org
> Cc: Ouyang, Changchun
> Subject: RE: [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of
> nb_q_per_pool in rte_eth_dev_check_mq_mode()
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> > Sent: Monday, January 19, 2015 9:02 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of
> > nb_q_per_pool in rte_eth_dev_check_mq_mode()
> >
> > If SRIOV is used and device configuration does not use MQ, the
> > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool is set to 1 in
> > rte_eth_dev_check_mq_mode().
> > This is bad because of two reasons:
> > 1. Port reconfiguration from non-MQ mode to MQ mode is impossible. 2.
> > Configuring RX and TX side in different ways is impossible.
> >
>
> This case is possible:
> rxmode.mq_mode is ETH_MQ_RX_VMDQ_RSS, and txmode.mq_mode is
> ETH_MQ_TX_NONE.
>
but ETH_MQ_RX_NONE -> ETH_MQ_RX_VMDQ_RSS is not.
I have 8 VFs
In testpmd
testpmd> port config all rxq 2
port config all rxq 2
testpmd> port start 0
port start 0
Configuring Port 0 (socket 0)
Fail to configure port 0
testpmd> port config all rxq 4
port config all rxq 4
testpmd> port start 0
port start 0
Configuring Port 0 (socket 0)
Fail to configure port 0
testpmd> port config all rxq 8
port config all rxq 8
testpmd> port start all
port start all
Configuring Port 0 (socket 0)
Fail to configure port 0
testpmd> port config all rxq 1
port config all rxq 1
testpmd> port start 0
port start 0
Configuring Port 0 (socket 0)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7ffec0ae9140 hw_ring=0x7ffec2c0bf00 dma_addr=0x102c0bf00
PMD: set_tx_function(): Using full-featured tx code path
PMD: set_tx_function(): - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
PMD: set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffec0ae88c0 hw_ring=0x7ffec2c1bf00 dma_addr=0x102c1bf00
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 0: 00:1B:21:C7:33:B0
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Down
Done
testpmd>
Please refer to the RSS patch thread. I will post a second reply there.
Pawel
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
` (3 preceding siblings ...)
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 4/4] testpmd: fix dcb in vt mode Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 1/7] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
` (7 more replies)
4 siblings, 8 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
This patchset enables DCB in SRIOV (ETH_MQ_RX_VMDQ_DCB and ETH_MQ_TX_VMDQ_DCB)
for each VF and PF for ixgbe driver.
As a side effect this allows using multiple TX queues in a VF (8 if there are
16 or fewer VFs, or 4 if there are 32 or fewer VFs) when PFC is not enabled.
PATCH v4 changes:
- Resend patchset, as the previous one was sent by mistake together with a different one.
PATCH v3 changes:
- Rework patch to fit the ixgbe RSS in VT mode changes.
- Move driver specific code from rte_ethdev.c to the driver code.
- Fix a bug in ixgbe driver VLAN filter enabling in PF, discovered during testing.
PATCH v2 changes:
- Split patch for easier review.
- Remove "pmd: add api version negotiation for ixgbe driver" and "pmd: extend
mailbox api to report number of RX/TX queues" patches as those have already
been merged from another patchset.
Pawel Wodkowski (7):
ethdev: Allow zero rx/tx queues in SRIOV mode
pmd igb: fix VMDQ mode checking
pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool
move rte_eth_dev_check_mq_mode() logic to ixgbe driver
pmd ixgbe: enable DCB in SRIOV
testpmd: fix DCB in SRIOV mode support
pmd ixgbe: fix vlan setting in PF
app/test-pmd/cmdline.c | 4 +-
app/test-pmd/testpmd.c | 39 +++++--
app/test-pmd/testpmd.h | 10 --
lib/librte_ether/rte_ethdev.c | 212 ++--------------------------------
lib/librte_ether/rte_ethdev.h | 3 +-
lib/librte_pmd_e1000/igb_ethdev.c | 45 +++++++-
lib/librte_pmd_e1000/igb_pf.c | 3 +-
lib/librte_pmd_e1000/igb_rxtx.c | 2 +-
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 126 ++++++++++++++++++---
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +-
lib/librte_pmd_ixgbe/ixgbe_pf.c | 220 +++++++++++++++++++++++++++++++-----
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 18 +--
12 files changed, 407 insertions(+), 280 deletions(-)
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 1/7] ethdev: Allow zero rx/tx queues in SRIOV mode
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 2/7] pmd igb: fix VMDQ mode checking Pawel Wodkowski
` (6 subsequent siblings)
7 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
Allow zero rx/tx queues to be passed to rte_eth_dev_configure(). This
way the PF might be used only for configuration purposes when no receive
and/or transmit functionality is needed.
Rationale:
in SRIOV mode the PF uses the first free VF for RX/TX (at least on ixgbe
based NICs). For example: if using an 82599EB based NIC and the VF count
is 16, 32 or 64, all resources are assigned to VFs, so the PF might be
used only for configuration purposes.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index ea3a1fb..2e814db 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
sizeof(dev->data->rx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
+ if (dev->data->rx_queues == NULL && nb_queues > 0) {
dev->data->nb_rx_queues = 0;
return -(ENOMEM);
}
@@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
sizeof(dev->data->tx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
+ if (dev->data->tx_queues == NULL && nb_queues > 0) {
dev->data->nb_tx_queues = 0;
return -(ENOMEM);
}
@@ -731,7 +731,10 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
if (nb_rx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV there can be no free resource for PF. So permit use only
+ * for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
if (nb_tx_q > dev_info.max_tx_queues) {
@@ -739,9 +742,13 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
port_id, nb_tx_q, dev_info.max_tx_queues);
return (-EINVAL);
}
+
if (nb_tx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV there can be no free resource for PF. So permit use only
+ * for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
/* Copy the dev_conf parameter into the dev structure */
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 2/7] pmd igb: fix VMDQ mode checking
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 1/7] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool Pawel Wodkowski
` (5 subsequent siblings)
7 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
RX mode is an enum created by ORing flags. Change the compare-by-value
to a flag test when enabling/disabling VLAN filtering during RX queue
setup.
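A short illustration of why the flag test is needed (the flag layout below is
a sketch of the rte_ethdev.h definitions this relies on; the actual change is
in the diff that follows):
	/* mq_mode values are composed by ORing feature flags, e.g.:
	 *   ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG
	 *   ETH_MQ_RX_VMDQ_DCB  = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG
	 *   ETH_MQ_RX_VMDQ_RSS  = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_RSS_FLAG
	 * so testing the flag covers every VMDq variant: */
	if ((dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) != 0)
		igb_vmdq_vlan_hw_filter_enable(dev); /* VMDq needs VLAN filter */
	/* whereas the old "== ETH_MQ_RX_VMDQ_ONLY" matched only one variant. */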
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_pmd_e1000/igb_ethdev.c | 2 +-
lib/librte_pmd_e1000/igb_rxtx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index 2a268b8..d451086 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -816,7 +816,7 @@ eth_igb_start(struct rte_eth_dev *dev)
ETH_VLAN_EXTEND_MASK;
eth_igb_vlan_offload_set(dev, mask);
- if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+ if ((dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) != 0) {
/* Enable VLAN filter since VMDq always use VLAN filter */
igb_vmdq_vlan_hw_filter_enable(dev);
}
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..79c458f 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -2150,7 +2150,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
/* Make sure VLAN Filters are off. */
- if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+ if ((dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) == 0)
rctl &= ~E1000_RCTL_VFE;
/* Don't store bad packets. */
rctl &= ~E1000_RCTL_SBP;
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 1/7] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 2/7] pmd igb: fix VMDQ mode checking Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-25 3:24 ` Ouyang, Changchun
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver Pawel Wodkowski
` (4 subsequent siblings)
7 siblings, 1 reply; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
The rx and tx queue numbers might be different if RX and TX are
configured in different modes. This allows informing the VF about
the proper number of queues.
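With the split, the PF's mailbox reply can carry different RX and TX counts;
a sketch of how a VF side might consume them (msgbuf and the index macros are
as used in the diff below; the surrounding mailbox read is omitted):
	/* msgbuf: mailbox reply buffer filled by ixgbe_get_vf_queues()
	 * on the PF side */
	uint32_t nb_rx = msgbuf[IXGBE_VF_RX_QUEUES];
	uint32_t nb_tx = msgbuf[IXGBE_VF_TX_QUEUES];
	uint32_t def_q = msgbuf[IXGBE_VF_DEF_QUEUE];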
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 12 ++++++------
lib/librte_ether/rte_ethdev.h | 3 ++-
lib/librte_pmd_e1000/igb_pf.c | 3 ++-
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 2 +-
lib/librte_pmd_ixgbe/ixgbe_pf.c | 9 +++++----
5 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 2e814db..4007054 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -520,7 +520,7 @@ rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q)
return -EINVAL;
}
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
+ RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = nb_rx_q;
RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
dev->pci_dev->max_vfs * nb_rx_q;
@@ -567,7 +567,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
dev->data->dev_conf.rxmode.mq_mode);
case ETH_MQ_RX_VMDQ_RSS:
dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
+ if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool)
if (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d"
" SRIOV active, invalid queue"
@@ -580,8 +580,8 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
+ RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
break;
}
@@ -600,8 +600,8 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/* check valid queue number */
- if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
- (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+ if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool) ||
+ (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)) {
PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
"queue number must less equal to %d\n",
port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 84160c3..af86401 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -1544,7 +1544,8 @@ struct rte_eth_dev {
struct rte_eth_dev_sriov {
uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
- uint8_t nb_q_per_pool; /**< rx queue number per pool */
+ uint8_t nb_rx_q_per_pool; /**< rx queue number per pool */
+ uint8_t nb_tx_q_per_pool; /**< tx queue number per pool */
uint16_t def_vmdq_idx; /**< Default pool num used for PF */
uint16_t def_pool_q_idx; /**< Default pool queue start reg index */
};
diff --git a/lib/librte_pmd_e1000/igb_pf.c b/lib/librte_pmd_e1000/igb_pf.c
index bc3816a..9d2f858 100644
--- a/lib/librte_pmd_e1000/igb_pf.c
+++ b/lib/librte_pmd_e1000/igb_pf.c
@@ -115,7 +115,8 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
rte_panic("Cannot allocate memory for private VF data\n");
RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
- RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index d6d408e..02b9cda 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -3564,7 +3564,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_vf_info *vfinfo =
*(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
- uint8_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ uint8_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
uint32_t queue_stride =
IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
uint32_t queue_idx = vf * queue_stride, idx = 0, vf_idx;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index dbda9b5..4103e97 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -129,7 +129,8 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
}
- RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
@@ -497,7 +498,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
{
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
- uint32_t default_q = vf * RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ uint32_t default_q = vf * RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
/* Verify if the PF supports the mbox APIs version or not */
switch (vfinfo[vf].api_version) {
@@ -509,8 +510,8 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
}
/* Notify VF of Rx and Tx queue number */
- msgbuf[IXGBE_VF_RX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
- msgbuf[IXGBE_VF_TX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ msgbuf[IXGBE_VF_RX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
+ msgbuf[IXGBE_VF_TX_QUEUES] = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
/* Notify VF of default queue */
msgbuf[IXGBE_VF_DEF_QUEUE] = default_q;
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
` (2 preceding siblings ...)
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-25 6:14 ` Ouyang, Changchun
2015-06-09 4:06 ` Wu, Jingjing
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV Pawel Wodkowski
` (3 subsequent siblings)
7 siblings, 2 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
Function rte_eth_dev_check_mq_mode() is driver specific. It should be
done in the PF configuration phase. This patch moves the igb/ixgbe
driver-specific mq checks and SRIOV configuration code to the driver
part. It also rewrites log messages to be shorter and more descriptive.
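The resulting flow, condensed (function names from the diff below; the return
convention is inferred from the hunk: <0 error, 0 handled in the SRIOV path,
>0 continue with the non-SRIOV checks):
	static int
	ixgbe_dev_configure(struct rte_eth_dev *dev)
	{
		int retval;

		/* driver-specific SRIOV/mq validation, formerly in rte_ethdev.c */
		retval = ixgbe_pf_configure_mq_sriov(dev);
		if (retval <= 0)
			return retval;

		/* ... non-SRIOV VMDq/DCB sanity checks follow (see hunks below) */
		return 0;
	}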
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 197 -----------------------------------
lib/librte_pmd_e1000/igb_ethdev.c | 43 ++++++++
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 105 ++++++++++++++++++-
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +-
lib/librte_pmd_ixgbe/ixgbe_pf.c | 202 +++++++++++++++++++++++++++++++-----
5 files changed, 327 insertions(+), 225 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 4007054..aa27e39 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -502,195 +502,6 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
return (0);
}
-static int
-rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q)
-{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
- switch (nb_rx_q) {
- case 1:
- case 2:
- RTE_ETH_DEV_SRIOV(dev).active =
- ETH_64_POOLS;
- break;
- case 4:
- RTE_ETH_DEV_SRIOV(dev).active =
- ETH_32_POOLS;
- break;
- default:
- return -EINVAL;
- }
-
- RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = nb_rx_q;
- RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
- dev->pci_dev->max_vfs * nb_rx_q;
-
- return 0;
-}
-
-static int
-rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
- const struct rte_eth_conf *dev_conf)
-{
- struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
- if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
- /* check multi-queue mode */
- if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
- (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS) ||
- (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
- /* SRIOV only works in VMDq enable mode */
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "wrong VMDQ mq_mode rx %u tx %u\n",
- port_id,
- dev_conf->rxmode.mq_mode,
- dev_conf->txmode.mq_mode);
- return (-EINVAL);
- }
-
- switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_DCB:
- case ETH_MQ_RX_VMDQ_DCB_RSS:
- /* DCB/RSS VMDQ in SRIOV mode, not implement yet */
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "unsupported VMDQ mq_mode rx %u\n",
- port_id, dev_conf->rxmode.mq_mode);
- return (-EINVAL);
- case ETH_MQ_RX_RSS:
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "Rx mq mode is changed from:"
- "mq_mode %u into VMDQ mq_mode %u\n",
- port_id,
- dev_conf->rxmode.mq_mode,
- dev->data->dev_conf.rxmode.mq_mode);
- case ETH_MQ_RX_VMDQ_RSS:
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
- if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool)
- if (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
- PMD_DEBUG_TRACE("ethdev port_id=%d"
- " SRIOV active, invalid queue"
- " number for VMDQ RSS, allowed"
- " value are 1, 2 or 4\n",
- port_id);
- return -EINVAL;
- }
- break;
- default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
- /* if nothing mq mode configure, use default scheme */
- dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
- break;
- }
-
- switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- /* DCB VMDQ in SRIOV mode, not implement yet */
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "unsupported VMDQ mq_mode tx %u\n",
- port_id, dev_conf->txmode.mq_mode);
- return (-EINVAL);
- default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
- /* if nothing mq mode configure, use default scheme */
- dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
- break;
- }
-
- /* check valid queue number */
- if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool) ||
- (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)) {
- PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
- "queue number must less equal to %d\n",
- port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
- return (-EINVAL);
- }
- } else {
- /* For vmdb+dcb mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
- const struct rte_eth_vmdq_dcb_conf *conf;
-
- if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
- PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_rx_q "
- "!= %d\n",
- port_id, ETH_VMDQ_DCB_NUM_QUEUES);
- return (-EINVAL);
- }
- conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
- if (! (conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
- PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
- "nb_queue_pools must be %d or %d\n",
- port_id, ETH_16_POOLS, ETH_32_POOLS);
- return (-EINVAL);
- }
- }
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
- const struct rte_eth_vmdq_dcb_tx_conf *conf;
-
- if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
- PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_tx_q "
- "!= %d\n",
- port_id, ETH_VMDQ_DCB_NUM_QUEUES);
- return (-EINVAL);
- }
- conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
- if (! (conf->nb_queue_pools == ETH_16_POOLS ||
- conf->nb_queue_pools == ETH_32_POOLS)) {
- PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
- "nb_queue_pools != %d or nb_queue_pools "
- "!= %d\n",
- port_id, ETH_16_POOLS, ETH_32_POOLS);
- return (-EINVAL);
- }
- }
-
- /* For DCB mode check our configuration before we go further */
- if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
- const struct rte_eth_dcb_rx_conf *conf;
-
- if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
- PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
- "!= %d\n",
- port_id, ETH_DCB_NUM_QUEUES);
- return (-EINVAL);
- }
- conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
- if (! (conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
- PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
- "nb_tcs != %d or nb_tcs "
- "!= %d\n",
- port_id, ETH_4_TCS, ETH_8_TCS);
- return (-EINVAL);
- }
- }
-
- if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
- const struct rte_eth_dcb_tx_conf *conf;
-
- if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
- PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
- "!= %d\n",
- port_id, ETH_DCB_NUM_QUEUES);
- return (-EINVAL);
- }
- conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
- if (! (conf->nb_tcs == ETH_4_TCS ||
- conf->nb_tcs == ETH_8_TCS)) {
- PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
- "nb_tcs != %d or nb_tcs "
- "!= %d\n",
- port_id, ETH_4_TCS, ETH_8_TCS);
- return (-EINVAL);
- }
- }
- }
- return 0;
-}
-
int
rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -798,14 +609,6 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
ETHER_MAX_LEN;
}
- /* multipe queue mode checking */
- diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
- if (diag != 0) {
- PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
- port_id, diag);
- return diag;
- }
-
/*
* Setup new number of RX/TX queues and reconfigure device.
*/
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index d451086..5c922df 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -742,6 +742,49 @@ eth_igb_configure(struct rte_eth_dev *dev)
struct e1000_interrupt *intr =
E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
+ enum rte_eth_rx_mq_mode rx_mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+ enum rte_eth_tx_mq_mode tx_mq_mode = dev->data->dev_conf.txmode.mq_mode;
+
+ if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+ /* Check multi-queue mode.
+ * To not break software we accept ETH_MQ_RX_NONE as this might be used
+ * to turn off VLAN filter.
+ *
+ * FIXME if support RSS together with VMDq & SRIOV
+ */
+ if (rx_mq_mode != ETH_MQ_RX_NONE &&
+ (rx_mq_mode & ETH_MQ_RX_VMDQ_ONLY) == 0) {
+ PMD_INIT_LOG(WARNING, " SRIOV active, RX mode %d is not supported."
+ "Driver will behave as in %d mode as fallback.",
+ rx_mq_mode, ETH_MQ_RX_NONE);
+ }
+
+ /* TX mode is not used in this driver so mode might be ignored. */
+ if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ /* SRIOV only works in VMDq enable mode */
+ PMD_INIT_LOG(WARNING, "TX mode %d is not supported in SRIOV. "
+ "Driver will behave as in %d mode as fallback.",
+ tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+ }
+ } else {
+ /*
+ * To not break software that sets an invalid mode, only display warning if
+ * invalid mode is used.
+ */
+ if ((rx_mq_mode & (ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG))
+ != rx_mq_mode) {
+ PMD_INIT_LOG(WARNING, "RX mode %d is not supported. Driver will "
+ "behave as in %d mode as fallback.", rx_mq_mode,
+ rx_mq_mode & (ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG));
+ }
+
+ if (tx_mq_mode != ETH_MQ_TX_NONE) {
+ PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
+ "Driver will behave as in %d mode as fallback.",
+ tx_mq_mode, ETH_MQ_TX_NONE);
+ }
+ }
+
PMD_INIT_FUNC_TRACE();
intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;
PMD_INIT_FUNC_TRACE();
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 02b9cda..8e9da3b 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -863,7 +863,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
"Failed to allocate %u bytes needed to store "
"MAC addresses",
ETHER_ADDR_LEN * hw->mac.num_rar_entries);
- return -ENOMEM;
+ diag = -ENOMEM;
+ goto error;
}
/* Copy the permanent MAC address */
ether_addr_copy((struct ether_addr *) hw->mac.perm_addr,
@@ -876,7 +877,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
PMD_INIT_LOG(ERR,
"Failed to allocate %d bytes needed to store MAC addresses",
ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
- return -ENOMEM;
+ diag = -ENOMEM;
+ goto error;
}
/* initialize the vfta */
@@ -886,7 +888,13 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
memset(hwstrip, 0, sizeof(*hwstrip));
/* initialize PF if max_vfs not zero */
- ixgbe_pf_host_init(eth_dev);
+ diag = ixgbe_pf_host_init(eth_dev);
+ if (diag < 0) {
+ PMD_INIT_LOG(ERR,
+ "Failed to allocate %d bytes needed to store MAC addresses",
+ ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
+ goto error;
+ }
ctrl_ext = IXGBE_READ_REG(hw, IXGBE_CTRL_EXT);
/* let hardware know driver is loaded */
@@ -918,6 +926,11 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
ixgbe_enable_intr(eth_dev);
return 0;
+
+error:
+ rte_free(eth_dev->data->hash_mac_addrs);
+ rte_free(eth_dev->data->mac_addrs);
+ return diag;
}
@@ -1434,7 +1447,93 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ struct rte_eth_dev_info dev_info;
+ int retval;
+
PMD_INIT_FUNC_TRACE();
+ retval = ixgbe_pf_configure_mq_sriov(dev);
+ if (retval <= 0)
+ return retval;
+
+ uint16_t nb_rx_q = dev->data->nb_rx_queues;
+ uint16_t nb_tx_q = dev->data->nb_rx_queues;
+
+ /* For DCB we need to obtain maximum number of queues dynamically,
+ * as this depends on max VF exported in PF. */
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
+ /* Use dev_infos_get field as this might be pointer to PF or VF. */
+ (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+ }
+
+ /* For vmdq+dcb mode check our configuration before we go further */
+ if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+ const struct rte_eth_vmdq_dcb_conf *conf;
+
+ if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
+ PMD_INIT_LOG(ERR, " VMDQ+DCB, nb_rx_q != %d\n",
+ ETH_VMDQ_DCB_NUM_QUEUES);
+ return (-EINVAL);
+ }
+ conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
+ if (conf->nb_queue_pools != ETH_16_POOLS &&
+ conf->nb_queue_pools != ETH_32_POOLS) {
+ PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
+ "number of RX queue pools must be %d or %d\n",
+ ETH_16_POOLS, ETH_32_POOLS);
+ return (-EINVAL);
+ }
+ } else if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+ /* For DCB mode check our configuration before we go further */
+ const struct rte_eth_dcb_rx_conf *conf;
+
+ if (nb_rx_q != dev_info.max_rx_queues) {
+ PMD_INIT_LOG(ERR, " DCB, number of RX queues != %d\n",
+ ETH_DCB_NUM_QUEUES);
+ return (-EINVAL);
+ }
+ conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
+ if (conf->nb_tcs != ETH_4_TCS &&
+ conf->nb_tcs != ETH_8_TCS) {
+ PMD_INIT_LOG(ERR, " DCB, number of RX TC must be %d or %d\n",
+ ETH_4_TCS, ETH_8_TCS);
+ return (-EINVAL);
+ }
+ }
+
+ if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ const struct rte_eth_vmdq_dcb_tx_conf *conf;
+
+ if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
+ PMD_INIT_LOG(ERR, " VMDQ+DCB, number of TX queues != %d\n",
+ ETH_VMDQ_DCB_NUM_QUEUES);
+ return (-EINVAL);
+ }
+ conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
+ if (conf->nb_queue_pools != ETH_16_POOLS &&
+ conf->nb_queue_pools != ETH_32_POOLS) {
+ PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
+ "number of TX qqueue pools must be %d or %d\n",
+ ETH_16_POOLS, ETH_32_POOLS);
+ return (-EINVAL);
+ }
+ } else if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ const struct rte_eth_dcb_tx_conf *conf;
+
+ if (nb_tx_q != dev_info.max_tx_queues) {
+ PMD_INIT_LOG(ERR, " DCB, number of queues must be %d\n",
+ ETH_DCB_NUM_QUEUES);
+ return (-EINVAL);
+ }
+ conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
+ if (conf->nb_tcs != ETH_4_TCS &&
+ conf->nb_tcs != ETH_8_TCS) {
+ PMD_INIT_LOG(ERR, " DCB, number of TX TC must be %d or %d\n",
+ ETH_4_TCS, ETH_8_TCS);
+ return (-EINVAL);
+ }
+ }
/* set flag to update link status after init */
intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
index 1383194..e70a6e8 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
@@ -348,11 +348,14 @@ void ixgbe_vlan_hw_strip_enable_all(struct rte_eth_dev *dev);
void ixgbe_vlan_hw_strip_disable_all(struct rte_eth_dev *dev);
-void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
+int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
void ixgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
+int ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev);
+
int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev);
uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val);
+
#endif /* _IXGBE_ETHDEV_H_ */
diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index 4103e97..a7b9333 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -91,7 +91,7 @@ ixgbe_mb_intr_setup(struct rte_eth_dev *dev)
return 0;
}
-void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
+int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
{
struct ixgbe_vf_info **vfinfo =
IXGBE_DEV_PRIVATE_TO_P_VFDATA(eth_dev->data->dev_private);
@@ -101,39 +101,31 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
IXGBE_DEV_PRIVATE_TO_UTA(eth_dev->data->dev_private);
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ int retval;
uint16_t vf_num;
- uint8_t nb_queue;
PMD_INIT_FUNC_TRACE();
- RTE_ETH_DEV_SRIOV(eth_dev).active = 0;
- if (0 == (vf_num = dev_num_vf(eth_dev)))
- return;
+ /* Fill sriov structure using default configuration. */
+ retval = ixgbe_pf_configure_mq_sriov(eth_dev);
+ if (retval != 0) {
+ if (retval < 0)
+ PMD_INIT_LOG(ERR, " Setting up SRIOV with default device "
+ "configuration should not fail. This is a BUG.");
+ return 0;
+ }
+ vf_num = dev_num_vf(eth_dev);
*vfinfo = rte_zmalloc("vf_info", sizeof(struct ixgbe_vf_info) * vf_num, 0);
- if (*vfinfo == NULL)
- rte_panic("Cannot allocate memory for private VF data\n");
+ if (*vfinfo == NULL) {
+ PMD_INIT_LOG(ERR, "Cannot allocate memory for private VF data.");
+ return (-ENOMEM);
+ }
memset(mirror_info,0,sizeof(struct ixgbe_mirror_info));
memset(uta_info,0,sizeof(struct ixgbe_uta_info));
hw->mac.mc_filter_type = 0;
- if (vf_num >= ETH_32_POOLS) {
- nb_queue = 2;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
- } else if (vf_num >= ETH_16_POOLS) {
- nb_queue = 4;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
- } else {
- nb_queue = 8;
- RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
- }
-
- RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
- RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
- RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
- RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
-
ixgbe_vf_perm_addr_gen(eth_dev, vf_num);
/* init_mailbox_params */
@@ -142,7 +134,169 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
/* set mb interrupt mask */
ixgbe_mb_intr_setup(eth_dev);
- return;
+ return 0;
+}
+
+
+/*
+ * Function that makes SRIOV configuration, based on device configuration,
+ * number of requested queues and number of VFs created.
+ * Function returns:
+ * 1 - SRIOV is not enabled (no VF created)
+ * 0 - proper SRIOV configuration found.
+ * -EINVAL - no suitable SRIOV configuration found.
+ */
+int
+ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
+ uint16_t vf_num;
+
+ vf_num = dev_num_vf(dev);
+ if (vf_num == 0) {
+ memset(sriov, 0, sizeof(*sriov));
+ return 1;
+ }
+
+ /* Check multi-queue mode. */
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
+ (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS) ||
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
+ /* SRIOV only works in VMDq enable mode */
+ PMD_INIT_LOG(ERR, " SRIOV active, "
+ "invlaid VMDQ rx mode (%u) or tx (%u) mode.",
+ dev_conf->rxmode.mq_mode, dev_conf->txmode.mq_mode);
+ return (-EINVAL);
+ }
+
+ switch (dev_conf->rxmode.mq_mode) {
+ case ETH_MQ_RX_VMDQ_DCB:
+ if (vf_num <= ETH_16_POOLS)
+ sriov->nb_rx_q_per_pool = 8;
+ else if (vf_num <= ETH_32_POOLS)
+ sriov->nb_rx_q_per_pool = 4;
+ else {
+ PMD_INIT_LOG(ERR,
+ "DCB (SRIOV active) - VF count (%d) must be less or equal 32.",
+ vf_num);
+ return (-EINVAL);
+ }
+
+ if (dev->data->nb_rx_queues < sriov->nb_rx_q_per_pool) {
+ PMD_INIT_LOG(WARNING,
+ "DCB (SRIOV active) rx queues (%d) count is not equal %d.",
+ dev->data->nb_rx_queues,
+ sriov->nb_rx_q_per_pool);
+ }
+ break;
+ case ETH_MQ_RX_RSS:
+ PMD_INIT_LOG(INFO, "RSS (SRIOV active), "
+ "rx mq mode is changed from: mq_mode %u into VMDQ mq_mode %u.",
+ dev_conf->rxmode.mq_mode, dev->data->dev_conf.rxmode.mq_mode);
+ dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+ /* fallthrough */
+ case ETH_MQ_RX_VMDQ_RSS:
+ if (vf_num >= ETH_64_POOLS) {
+ /* FIXME: Is vf_num > 64 really supported by hardware? */
+ PMD_INIT_LOG(ERR, "RSS (SRIOV active), "
+ "VFs num must be less or equal 64.");
+ return (-EINVAL);
+ } else if (vf_num >= ETH_32_POOLS) {
+ if (dev->data->nb_rx_queues != 1 && dev->data->nb_rx_queues != 2) {
+ PMD_INIT_LOG(ERR, "RSS (SRIOV active, VF count >= 32),"
+ "invalid rx queues count %d. It must be 1 or 2.",
+ dev->data->nb_rx_queues);
+ return (-EINVAL);
+ }
+
+ sriov->nb_rx_q_per_pool = dev->data->nb_rx_queues;
+ } else {
+ /* FIXME: is VT(16) + RSS really supported? */
+ if (dev->data->nb_rx_queues != 4) {
+ PMD_INIT_LOG(ERR, "RSS (SRIOV active, VFs count < 32), "
+ "invalid rx queues count %d. It must be 4.",
+ dev->data->nb_rx_queues);
+ return (-EINVAL);
+ }
+
+ sriov->nb_rx_q_per_pool = 4;
+ }
+ break;
+ default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
+ /* if no mq mode is configured, use the default scheme */
+ if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY) {
+ PMD_INIT_LOG(INFO, "Rx mq mode changed from %u into VMDQ %u.",
+ dev->data->dev_conf.rxmode.mq_mode, ETH_MQ_RX_VMDQ_ONLY);
+
+ dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+ }
+
+ /* queue 0 of each pool is used. */
+ sriov->nb_rx_q_per_pool = 1;
+ break;
+ }
+
+ switch (dev_conf->txmode.mq_mode) {
+ case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
+ if (vf_num <= ETH_16_POOLS)
+ sriov->nb_tx_q_per_pool = 8;
+ else if (vf_num <= ETH_32_POOLS)
+ sriov->nb_tx_q_per_pool = 4;
+ else if (vf_num <= ETH_64_POOLS)
+ sriov->nb_tx_q_per_pool = 1;
+ else {
+ PMD_INIT_LOG(ERR, "DCB (SRIOV active), "
+ "VF count (%d) must be less or equal 64.",
+ vf_num);
+ return (-EINVAL);
+ }
+ break;
+ default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+ /* if no mq mode is configured, use the default scheme */
+ if (dev->data->dev_conf.txmode.mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+ PMD_INIT_LOG(INFO, "Tx mq mode is changed from %u into VMDQ %u.",
+ dev->data->dev_conf.txmode.mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+
+ dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+ }
+
+ /* queue 0 of each pool is used. */
+ sriov->nb_tx_q_per_pool = 1;
+ break;
+ }
+
+ sriov->def_vmdq_idx = vf_num;
+
+ /*
+ * Pools start at 2xN, 4xN or 8xN
+ */
+ if (vf_num >= ETH_32_POOLS) {
+ /* This must be vf_num <= ETH_64_POOLS */
+ sriov->active = ETH_64_POOLS;
+ sriov->def_pool_q_idx = vf_num * 2;
+ } else if (vf_num >= ETH_16_POOLS) {
+ sriov->active = ETH_32_POOLS;
+ sriov->def_pool_q_idx = vf_num * 4;
+ } else {
+ sriov->active = ETH_16_POOLS;
+ sriov->def_pool_q_idx = vf_num * 8;
+ }
+
+ /* Check if available queus count is not less than allocated.*/
+ if (dev->data->nb_rx_queues > sriov->nb_rx_q_per_pool) {
+ PMD_INIT_LOG(ERR, "SRIOV active, rx queue count must less or equal %d.",
+ sriov->nb_rx_q_per_pool);
+ return (-EINVAL);
+ }
+
+ if (dev->data->nb_rx_queues > sriov->nb_tx_q_per_pool) {
+ PMD_INIT_LOG(ERR, "SRIOV active, tx queue count must less or equal %d.",
+ sriov->nb_tx_q_per_pool);
+ return (-EINVAL);
+ }
+
+ return 0;
}
int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
` (3 preceding siblings ...)
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-25 3:36 ` Ouyang, Changchun
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 6/7] testpmd: fix DCB in SRIOV mode support Pawel Wodkowski
` (2 subsequent siblings)
7 siblings, 1 reply; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
Enable DCB in SRIOV mode for the ixgbe driver.
To use DCB in a VF, the PF must configure the port as DCB + VMDQ and the
VF must configure the port as DCB only. VFs are not allowed to change DCB
settings that are common to all ports, such as the number of TCs.
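As an illustrative sketch only (not part of this patch): a possible PF-side
setup using the ethdev API from this series. The function name, port id and
queue counts are placeholders, and the pool_map/dcb_queue filling is omitted
for brevity.
#include <string.h>
#include <rte_ethdev.h>
static int
example_pf_dcb_vmdq_setup(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;
	memset(&conf, 0, sizeof(conf));
	/* PF side: DCB + VMDQ on both RX and TX. */
	conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
	conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
	conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools = ETH_16_POOLS;
	conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools = ETH_16_POOLS;
	/* A VF configures the same port as plain DCB instead:
	 *   conf.rxmode.mq_mode = ETH_MQ_RX_DCB;
	 *   conf.txmode.mq_mode = ETH_MQ_TX_DCB;
	 * and must not touch port-global DCB settings such as the TC count. */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}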
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 2 +-
lib/librte_pmd_ixgbe/ixgbe_pf.c | 19 ++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 18 +++++++++++-------
3 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 8e9da3b..7551bcc 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -1514,7 +1514,7 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
if (conf->nb_queue_pools != ETH_16_POOLS &&
conf->nb_queue_pools != ETH_32_POOLS) {
PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
- "number of TX qqueue pools must be %d or %d\n",
+ "number of TX queue pools must be %d or %d\n",
ETH_16_POOLS, ETH_32_POOLS);
return (-EINVAL);
}
diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index a7b9333..7c4afba 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -109,9 +109,12 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
/* Fill sriov structure using default configuration. */
retval = ixgbe_pf_configure_mq_sriov(eth_dev);
if (retval != 0) {
- if (retval < 0)
- PMD_INIT_LOG(ERR, " Setting up SRIOV with default device "
+ if (retval < 0) {
+ PMD_INIT_LOG(ERR, "Setting up SRIOV with default device "
"configuration should not fail. This is a BUG.");
+ return retval;
+ }
+
return 0;
}
@@ -652,7 +655,9 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
{
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
- uint32_t default_q = vf * RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
+ struct ixgbe_dcb_config *dcbinfo =
+ IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
+ uint32_t default_q = RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx;
/* Verify if the PF supports the mbox APIs version or not */
switch (vfinfo[vf].api_version) {
@@ -670,10 +675,10 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
/* Notify VF of default queue */
msgbuf[IXGBE_VF_DEF_QUEUE] = default_q;
- /*
- * FIX ME if it needs fill msgbuf[IXGBE_VF_TRANS_VLAN]
- * for VLAN strip or VMDQ_DCB or VMDQ_DCB_RSS
- */
+ if (dcbinfo->num_tcs.pg_tcs)
+ msgbuf[IXGBE_VF_TRANS_VLAN] = dcbinfo->num_tcs.pg_tcs;
+ else
+ msgbuf[IXGBE_VF_TRANS_VLAN] = 1;
return 0;
}
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..2e3522c 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
/* check support mq_mode for DCB */
if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
- return;
-
- if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
+ (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
return;
/** Configure DCB hardware **/
@@ -3442,8 +3441,13 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
ixgbe_config_vf_rss(dev);
break;
- /* FIXME if support DCB/RSS together with VMDq & SRIOV */
+ /*
+ * DCB will be configured during port startup.
+ */
case ETH_MQ_RX_VMDQ_DCB:
+ break;
+
+ /* FIXME if support DCB+RSS together with VMDq & SRIOV */
case ETH_MQ_RX_VMDQ_DCB_RSS:
PMD_INIT_LOG(ERR,
"Could not support DCB with VMDq & SRIOV");
@@ -3488,8 +3492,8 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
switch (RTE_ETH_DEV_SRIOV(dev).active) {
/*
- * SRIOV active scheme
- * FIXME if support DCB together with VMDq & SRIOV
+ * SRIOV active scheme.
+ * Note: DCB will be configured during port startup.
*/
case ETH_64_POOLS:
mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 6/7] testpmd: fix DCB in SRIOV mode support
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
` (4 preceding siblings ...)
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 7/7] pmd ixgbe: fix vlan setting in in PF Pawel Wodkowski
2015-06-08 3:00 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Zhang, Helin
7 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
This patch incorporates fixes to support DCB in SRIOV mode for testpmd.
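As a usage sketch (the command syntax is assumed from the cmd_config_dcb
handler this patch touches; the port id and TC count are example values),
DCB in VT mode would then be enabled in testpmd with something like:
	testpmd> port stop 0
	testpmd> port config 0 dcb vt on 4 pfc off
	testpmd> port start 0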
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
app/test-pmd/cmdline.c | 4 ++--
app/test-pmd/testpmd.c | 39 +++++++++++++++++++++++++++++----------
app/test-pmd/testpmd.h | 10 ----------
3 files changed, 31 insertions(+), 22 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4753bb4..1e30ca6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1964,9 +1964,9 @@ cmd_config_dcb_parsed(void *parsed_result,
/* DCB in VT mode */
if (!strncmp(res->vt_en, "on",2))
- dcb_conf.dcb_mode = DCB_VT_ENABLED;
+ dcb_conf.vt_en = 1;
else
- dcb_conf.dcb_mode = DCB_ENABLED;
+ dcb_conf.vt_en = 0;
if (!strncmp(res->pfc_en, "on",2)) {
dcb_conf.pfc_en = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 3aebea6..bdbf237 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1766,7 +1766,8 @@ const uint16_t vlan_tags[] = {
};
static int
-get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
+get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf,
+ uint16_t sriov)
{
uint8_t i;
@@ -1774,7 +1775,7 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
* Builds up the correct configuration for dcb+vt based on the vlan tags array
* given above, and the number of traffic classes available for use.
*/
- if (dcb_conf->dcb_mode == DCB_VT_ENABLED) {
+ if (dcb_conf->vt_en == 1) {
struct rte_eth_vmdq_dcb_conf vmdq_rx_conf;
struct rte_eth_vmdq_dcb_tx_conf vmdq_tx_conf;
@@ -1791,9 +1792,17 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
vmdq_rx_conf.pool_map[i].vlan_id = vlan_tags[ i ];
vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
}
- for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
- vmdq_rx_conf.dcb_queue[i] = i;
- vmdq_tx_conf.dcb_queue[i] = i;
+
+ if (sriov == 0) {
+ for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ vmdq_rx_conf.dcb_queue[i] = i;
+ vmdq_tx_conf.dcb_queue[i] = i;
+ }
+ } else {
+ for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+ vmdq_rx_conf.dcb_queue[i] = i % dcb_conf->num_tcs;
+ vmdq_tx_conf.dcb_queue[i] = i % dcb_conf->num_tcs;
+ }
}
/*set DCB mode of RX and TX of multiple queues*/
@@ -1851,22 +1860,32 @@ init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
uint16_t nb_vlan;
uint16_t i;
- /* rxq and txq configuration in dcb mode */
- nb_rxq = 128;
- nb_txq = 128;
rx_free_thresh = 64;
+ rte_port = &ports[pid];
memset(&port_conf,0,sizeof(struct rte_eth_conf));
/* Enter DCB configuration status */
dcb_config = 1;
nb_vlan = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
/*set configuration of DCB in vt mode and DCB in non-vt mode*/
- retval = get_eth_dcb_conf(&port_conf, dcb_conf);
+ retval = get_eth_dcb_conf(&port_conf, dcb_conf, rte_port->dev_info.max_vfs);
+
+ /* rxq and txq configuration in dcb mode */
+ nb_rxq = rte_port->dev_info.max_rx_queues;
+ nb_txq = rte_port->dev_info.max_tx_queues;
+
+ if (rte_port->dev_info.max_vfs) {
+ if (port_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
+ nb_rxq /= port_conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+
+ if (port_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
+ nb_txq /= port_conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+ }
+
if (retval < 0)
return retval;
- rte_port = &ports[pid];
memcpy(&rte_port->dev_conf, &port_conf,sizeof(struct rte_eth_conf));
rxtx_port_config(rte_port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 581130b..0ef3257 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -230,20 +230,10 @@ struct fwd_config {
portid_t nb_fwd_ports; /**< Nb. of ports involved. */
};
-/**
- * DCB mode enable
- */
-enum dcb_mode_enable
-{
- DCB_VT_ENABLED,
- DCB_ENABLED
-};
-
/*
* DCB general config info
*/
struct dcb_config {
- enum dcb_mode_enable dcb_mode;
uint8_t vt_en;
enum rte_eth_nb_tcs num_tcs;
uint8_t pfc_en;
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH v4 7/7] pmd ixgbe: fix vlan setting in in PF
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
` (5 preceding siblings ...)
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 6/7] testpmd: fix DCB in SRIOV mode support Pawel Wodkowski
@ 2015-02-19 15:54 ` Pawel Wodkowski
2015-06-08 3:00 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Zhang, Helin
7 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-19 15:54 UTC (permalink / raw)
To: dev
The ixgbe_vlan_filter_set() should use hw->mac.ops.set_vfta() to set up
VLAN filtering, as this is a generic function that handles both the
non-SRIOV and SRIOV cases.
The bug was discovered by issuing the testpmd command 'rx_vlan add VLAN
PORT' for the PF. The requested VLAN was enabled, but the pool mask was
not set. Only the command 'rx_vlan add VLAN port PORT vf MASK' could
enable the given VLAN id for the PF.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 7551bcc..7aef0e8 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -1162,21 +1162,18 @@ ixgbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_vfta * shadow_vfta =
IXGBE_DEV_PRIVATE_TO_VFTA(dev->data->dev_private);
- uint32_t vfta;
+ struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
+ u32 vind = sriov->active ? sriov->def_vmdq_idx : 0;
+ s32 ret_val;
uint32_t vid_idx;
- uint32_t vid_bit;
- vid_idx = (uint32_t) ((vlan_id >> 5) & 0x7F);
- vid_bit = (uint32_t) (1 << (vlan_id & 0x1F));
- vfta = IXGBE_READ_REG(hw, IXGBE_VFTA(vid_idx));
- if (on)
- vfta |= vid_bit;
- else
- vfta &= ~vid_bit;
- IXGBE_WRITE_REG(hw, IXGBE_VFTA(vid_idx), vfta);
+ ret_val = hw->mac.ops.set_vfta(hw, vlan_id, vind, on);
+ if (ret_val != IXGBE_SUCCESS)
+ return ret_val;
+ vid_idx = (uint32_t) ((vlan_id >> 5) & 0x7F);
/* update local VFTA copy */
- shadow_vfta->vfta[vid_idx] = vfta;
+ shadow_vfta->vfta[vid_idx] = IXGBE_READ_REG(hw, IXGBE_VFTA(vid_idx));
return 0;
}
--
1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool Pawel Wodkowski
@ 2015-02-25 3:24 ` Ouyang, Changchun
2015-02-25 7:47 ` Pawel Wodkowski
0 siblings, 1 reply; 41+ messages in thread
From: Ouyang, Changchun @ 2015-02-25 3:24 UTC (permalink / raw)
To: Wodkowski, PawelX, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Thursday, February 19, 2015 11:55 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx
> and tx nb_q_per_pool
>
> rx and tx number of queue might be different if RX and TX are configured in
> different mode. This allow to inform VF about proper number of queues.
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 12 ++++++------
> lib/librte_ether/rte_ethdev.h | 3 ++-
> lib/librte_pmd_e1000/igb_pf.c | 3 ++-
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 2 +-
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 9 +++++----
> 5 files changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 2e814db..4007054 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -520,7 +520,7 @@ rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id,
> uint16_t nb_rx_q)
> return -EINVAL;
> }
>
> - RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
> + RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = nb_rx_q;
> RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
> dev->pci_dev->max_vfs * nb_rx_q;
>
> @@ -567,7 +567,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> dev->data-
> >dev_conf.rxmode.mq_mode);
> case ETH_MQ_RX_VMDQ_RSS:
> dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_RSS;
> - if (nb_rx_q <=
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
> + if (nb_rx_q <=
> RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool)
> if
> (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
> PMD_DEBUG_TRACE("ethdev
> port_id=%d"
> " SRIOV active, invalid queue"
> @@ -580,8 +580,8 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> default: /* ETH_MQ_RX_VMDQ_ONLY or
> ETH_MQ_RX_NONE */
> /* if nothing mq mode configure, use default scheme
> */
> dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_ONLY;
> - if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> - RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool =
> 1;
> + if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
> +
> RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
> break;
> }
>
> @@ -600,8 +600,8 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /* check valid queue number */
> - if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
> - (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
> + if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)
Here, how about using nb_rx_q_per_pool to replace nb_tx_q_per_pool?
It would make it clearer that the rx queue number is being checked.
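I.e., a sketch of the suggested form (illustrative only):
	if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool) ||
	    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool))
		return (-EINVAL);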
> ||
> + (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool))
> {
> PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV
> active, "
> "queue number must less equal to %d\n",
> port_id,
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV Pawel Wodkowski
@ 2015-02-25 3:36 ` Ouyang, Changchun
2015-02-25 11:29 ` Pawel Wodkowski
0 siblings, 1 reply; 41+ messages in thread
From: Ouyang, Changchun @ 2015-02-25 3:36 UTC (permalink / raw)
To: Wodkowski, PawelX, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Thursday, February 19, 2015 11:55 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV
>
> Enable DCB in SRIOV mode for the ixgbe driver.
>
> To use DCB in a VF, the PF must configure the port as DCB + VMDQ and the
> VF must configure the port as DCB only. VFs are not allowed to change DCB
> settings that are common to all ports, such as the number of TCs.
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
> ---
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 2 +-
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 19 ++++++++++++-------
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 18 +++++++++++-------
> 3 files changed, 24 insertions(+), 15 deletions(-)
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> index 8e9da3b..7551bcc 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> @@ -1514,7 +1514,7 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
> if (conf->nb_queue_pools != ETH_16_POOLS &&
> conf->nb_queue_pools != ETH_32_POOLS) {
> PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
> - "number of TX qqueue pools must
> be %d or %d\n",
> + "number of TX queue pools must
> be %d or %d\n",
> ETH_16_POOLS, ETH_32_POOLS);
> return (-EINVAL);
> }
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> b/lib/librte_pmd_ixgbe/ixgbe_pf.c index a7b9333..7c4afba 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> @@ -109,9 +109,12 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
> /* Fill sriov structure using default configuration. */
> retval = ixgbe_pf_configure_mq_sriov(eth_dev);
> if (retval != 0) {
> - if (retval < 0)
> - PMD_INIT_LOG(ERR, " Setting up SRIOV with default
> device "
> + if (retval < 0) {
> + PMD_INIT_LOG(ERR, "Setting up SRIOV with default
> device "
> "configuration should not fail. This is a
> BUG.");
> + return retval;
> + }
> +
> return 0;
> }
>
> @@ -652,7 +655,9 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev,
> uint32_t vf, uint32_t *msgbuf) {
> struct ixgbe_vf_info *vfinfo =
> *IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data-
> >dev_private);
> - uint32_t default_q = vf *
> RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
> + struct ixgbe_dcb_config *dcbinfo =
> + IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data-
> >dev_private);
> + uint32_t default_q = RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx;
Why do we need to change the default_q here?
>
> /* Verify if the PF supports the mbox APIs version or not */
> switch (vfinfo[vf].api_version) {
> @@ -670,10 +675,10 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev,
> uint32_t vf, uint32_t *msgbuf)
> /* Notify VF of default queue */
> msgbuf[IXGBE_VF_DEF_QUEUE] = default_q;
>
> - /*
> - * FIX ME if it needs fill msgbuf[IXGBE_VF_TRANS_VLAN]
> - * for VLAN strip or VMDQ_DCB or VMDQ_DCB_RSS
> - */
> + if (dcbinfo->num_tcs.pg_tcs)
> + msgbuf[IXGBE_VF_TRANS_VLAN] = dcbinfo-
> >num_tcs.pg_tcs;
> + else
> + msgbuf[IXGBE_VF_TRANS_VLAN] = 1;
>
> return 0;
> }
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index e6766b3..2e3522c 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> @@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev
> *dev)
>
> /* check support mq_mode for DCB */
> if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
> - (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
> - return;
> -
> - if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
> + (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
> + (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
> + (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
> return;
>
> /** Configure DCB hardware **/
> @@ -3442,8 +3441,13 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev
> *dev)
> ixgbe_config_vf_rss(dev);
> break;
>
> - /* FIXME if support DCB/RSS together with VMDq & SRIOV */
> + /*
> + * DCB will be configured during port startup.
> + */
> case ETH_MQ_RX_VMDQ_DCB:
> + break;
> +
> + /* FIXME if support DCB+RSS together with VMDq & SRIOV
> */
> case ETH_MQ_RX_VMDQ_DCB_RSS:
> PMD_INIT_LOG(ERR,
> "Could not support DCB with VMDq &
> SRIOV"); @@ -3488,8 +3492,8 @@ ixgbe_dev_mq_tx_configure(struct
> rte_eth_dev *dev)
> switch (RTE_ETH_DEV_SRIOV(dev).active) {
>
> /*
> - * SRIOV active scheme
> - * FIXME if support DCB together with VMDq & SRIOV
> + * SRIOV active scheme.
> + * Note: DCB will be configured during port startup.
> */
> case ETH_64_POOLS:
> mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
> --
> 1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver Pawel Wodkowski
@ 2015-02-25 6:14 ` Ouyang, Changchun
2015-02-25 9:57 ` Pawel Wodkowski
2015-06-09 4:06 ` Wu, Jingjing
1 sibling, 1 reply; 41+ messages in thread
From: Ouyang, Changchun @ 2015-02-25 6:14 UTC (permalink / raw)
To: Wodkowski, PawelX, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Thursday, February 19, 2015 11:55 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode()
> logic to driver
>
> Function rte_eth_dev_check_mq_mode() is driver specific. It should be
> done in PF configuration phase. This patch move igb/ixgbe driver specific mq
> check and SRIOV configuration code to driver part. Also rewriting log
> messages to be shorter and more descriptive.
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 197 -----------------------------------
> lib/librte_pmd_e1000/igb_ethdev.c | 43 ++++++++
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 105 ++++++++++++++++++-
> lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +-
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 202
> +++++++++++++++++++++++++++++++-----
> 5 files changed, 327 insertions(+), 225 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 4007054..aa27e39 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -502,195 +502,6 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev
> *dev, uint16_t nb_queues)
> return (0);
> }
>
> -static int
> -rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q) -{
> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> - switch (nb_rx_q) {
> - case 1:
> - case 2:
> - RTE_ETH_DEV_SRIOV(dev).active =
> - ETH_64_POOLS;
> - break;
> - case 4:
> - RTE_ETH_DEV_SRIOV(dev).active =
> - ETH_32_POOLS;
> - break;
> - default:
> - return -EINVAL;
> - }
> -
> - RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = nb_rx_q;
> - RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
> - dev->pci_dev->max_vfs * nb_rx_q;
> -
> - return 0;
> -}
> -
> -static int
> -rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> - const struct rte_eth_conf *dev_conf)
> -{
> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> -
> - if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
> - /* check multi-queue mode */
> - if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> - (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS)
> ||
> - (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
> - /* SRIOV only works in VMDq enable mode */
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> - " SRIOV active, "
> - "wrong VMDQ mq_mode rx %u
> tx %u\n",
> - port_id,
> - dev_conf->rxmode.mq_mode,
> - dev_conf->txmode.mq_mode);
> - return (-EINVAL);
> - }
> -
> - switch (dev_conf->rxmode.mq_mode) {
> - case ETH_MQ_RX_VMDQ_DCB:
> - case ETH_MQ_RX_VMDQ_DCB_RSS:
> - /* DCB/RSS VMDQ in SRIOV mode, not implement
> yet */
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> - " SRIOV active, "
> - "unsupported VMDQ mq_mode
> rx %u\n",
> - port_id, dev_conf-
> >rxmode.mq_mode);
> - return (-EINVAL);
> - case ETH_MQ_RX_RSS:
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> - " SRIOV active, "
> - "Rx mq mode is changed from:"
> - "mq_mode %u into VMDQ
> mq_mode %u\n",
> - port_id,
> - dev_conf->rxmode.mq_mode,
> - dev->data-
> >dev_conf.rxmode.mq_mode);
> - case ETH_MQ_RX_VMDQ_RSS:
> - dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_RSS;
> - if (nb_rx_q <=
> RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool)
> - if
> (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
> - PMD_DEBUG_TRACE("ethdev
> port_id=%d"
> - " SRIOV active, invalid queue"
> - " number for VMDQ RSS,
> allowed"
> - " value are 1, 2 or 4\n",
> - port_id);
> - return -EINVAL;
> - }
> - break;
> - default: /* ETH_MQ_RX_VMDQ_ONLY or
> ETH_MQ_RX_NONE */
> - /* if nothing mq mode configure, use default scheme
> */
> - dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_ONLY;
> - if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
> -
> RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
> - break;
> - }
> -
> - switch (dev_conf->txmode.mq_mode) {
> - case ETH_MQ_TX_VMDQ_DCB:
> - /* DCB VMDQ in SRIOV mode, not implement yet */
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> - " SRIOV active, "
> - "unsupported VMDQ mq_mode
> tx %u\n",
> - port_id, dev_conf-
> >txmode.mq_mode);
> - return (-EINVAL);
> - default: /* ETH_MQ_TX_VMDQ_ONLY or
> ETH_MQ_TX_NONE */
> - /* if nothing mq mode configure, use default scheme
> */
> - dev->data->dev_conf.txmode.mq_mode =
> ETH_MQ_TX_VMDQ_ONLY;
> - break;
> - }
> -
> - /* check valid queue number */
> - if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)
> ||
> - (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool))
> {
> - PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV
> active, "
> - "queue number must less equal to %d\n",
> - port_id,
> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
> - return (-EINVAL);
> - }
> - } else {
> - /* For vmdb+dcb mode check our configuration before we
> go further */
> - if (dev_conf->rxmode.mq_mode ==
> ETH_MQ_RX_VMDQ_DCB) {
> - const struct rte_eth_vmdq_dcb_conf *conf;
> -
> - if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> VMDQ+DCB, nb_rx_q "
> - "!= %d\n",
> - port_id,
> ETH_VMDQ_DCB_NUM_QUEUES);
> - return (-EINVAL);
> - }
> - conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
> - if (! (conf->nb_queue_pools == ETH_16_POOLS ||
> - conf->nb_queue_pools == ETH_32_POOLS)) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> VMDQ+DCB selected, "
> - "nb_queue_pools must
> be %d or %d\n",
> - port_id, ETH_16_POOLS,
> ETH_32_POOLS);
> - return (-EINVAL);
> - }
> - }
> - if (dev_conf->txmode.mq_mode ==
> ETH_MQ_TX_VMDQ_DCB) {
> - const struct rte_eth_vmdq_dcb_tx_conf *conf;
> -
> - if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> VMDQ+DCB, nb_tx_q "
> - "!= %d\n",
> - port_id,
> ETH_VMDQ_DCB_NUM_QUEUES);
> - return (-EINVAL);
> - }
> - conf = &(dev_conf-
> >tx_adv_conf.vmdq_dcb_tx_conf);
> - if (! (conf->nb_queue_pools == ETH_16_POOLS ||
> - conf->nb_queue_pools == ETH_32_POOLS)) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> VMDQ+DCB selected, "
> - "nb_queue_pools != %d or
> nb_queue_pools "
> - "!= %d\n",
> - port_id, ETH_16_POOLS,
> ETH_32_POOLS);
> - return (-EINVAL);
> - }
> - }
> -
> - /* For DCB mode check our configuration before we go
> further */
> - if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
> - const struct rte_eth_dcb_rx_conf *conf;
> -
> - if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> DCB, nb_rx_q "
> - "!= %d\n",
> - port_id,
> ETH_DCB_NUM_QUEUES);
> - return (-EINVAL);
> - }
> - conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
> - if (! (conf->nb_tcs == ETH_4_TCS ||
> - conf->nb_tcs == ETH_8_TCS)) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> DCB selected, "
> - "nb_tcs != %d or nb_tcs "
> - "!= %d\n",
> - port_id, ETH_4_TCS,
> ETH_8_TCS);
> - return (-EINVAL);
> - }
> - }
> -
> - if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
> - const struct rte_eth_dcb_tx_conf *conf;
> -
> - if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> DCB, nb_tx_q "
> - "!= %d\n",
> - port_id,
> ETH_DCB_NUM_QUEUES);
> - return (-EINVAL);
> - }
> - conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
> - if (! (conf->nb_tcs == ETH_4_TCS ||
> - conf->nb_tcs == ETH_8_TCS)) {
> - PMD_DEBUG_TRACE("ethdev port_id=%d
> DCB selected, "
> - "nb_tcs != %d or nb_tcs "
> - "!= %d\n",
> - port_id, ETH_4_TCS,
> ETH_8_TCS);
> - return (-EINVAL);
> - }
> - }
> - }
> - return 0;
> -}
> -
> int
> rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf) @@ -798,14 +609,6
> @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> ETHER_MAX_LEN;
> }
>
> - /* multipe queue mode checking */
> - diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q,
> dev_conf);
> - if (diag != 0) {
> - PMD_DEBUG_TRACE("port%d
> rte_eth_dev_check_mq_mode = %d\n",
> - port_id, diag);
> - return diag;
> - }
> -
> /*
> * Setup new number of RX/TX queues and reconfigure device.
> */
> diff --git a/lib/librte_pmd_e1000/igb_ethdev.c
> b/lib/librte_pmd_e1000/igb_ethdev.c
> index d451086..5c922df 100644
> --- a/lib/librte_pmd_e1000/igb_ethdev.c
> +++ b/lib/librte_pmd_e1000/igb_ethdev.c
> @@ -742,6 +742,49 @@ eth_igb_configure(struct rte_eth_dev *dev)
> struct e1000_interrupt *intr =
> E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
>
> + enum rte_eth_rx_mq_mode rx_mq_mode = dev->data-
> >dev_conf.rxmode.mq_mode;
> + enum rte_eth_tx_mq_mode tx_mq_mode =
> +dev->data->dev_conf.txmode.mq_mode;
> +
> + if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
> + /* Check multi-queue mode.
> + * To not break software, we accept ETH_MQ_RX_NONE as
> this might be used
> + * to turn off VLAN filter.
> + *
> + * FIXME if support RSS together with VMDq & SRIOV
> + */
> + if (rx_mq_mode != ETH_MQ_RX_NONE &&
> + (rx_mq_mode & ETH_MQ_RX_VMDQ_ONLY)
> == 0) {
> + PMD_INIT_LOG(WARNING, " SRIOV active, RX
> mode %d is not supported."
> + "Driver will behave as in %d mode as
> fallback.",
> + rx_mq_mode, ETH_MQ_RX_NONE);
> + }
> +
> + /* TX mode is not used in this driver, so the mode might be
> ignored.*/
> + if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
> + /* SRIOV only works in VMDq enable mode */
> + PMD_INIT_LOG(WARNING, "TX mode %d is not
> supported in SRIOV. "
> + "Driver will behave as in %d mode as
> fallback.",
> + tx_mq_mode,
> ETH_MQ_TX_VMDQ_ONLY);
> + }
> + } else {
> + /*
> + * To not break software that sets an invalid mode, only display a
> warning if
> + * invalid mode is used.
> + */
> + if ((rx_mq_mode & (ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_VMDQ_FLAG))
> + != rx_mq_mode) {
> + PMD_INIT_LOG(WARNING, "RX mode %d is not
> supported. Driver will "
> + "behave as in %d mode as fallback.",
> rx_mq_mode,
> + rx_mq_mode &
> (ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG));
> + }
> +
> + if (tx_mq_mode != ETH_MQ_TX_NONE) {
> + PMD_INIT_LOG(WARNING, "TX mode %d is not
> supported."
> + "Driver will behave as in %d mode as
> fallback.",
> + tx_mq_mode, ETH_MQ_TX_NONE);
> + }
> + }
> +
Better to have a new function for this new code.
> PMD_INIT_FUNC_TRACE();
> intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;
> PMD_INIT_FUNC_TRACE();
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> index 02b9cda..8e9da3b 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> @@ -863,7 +863,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
> eth_driver *eth_drv,
> "Failed to allocate %u bytes needed to store "
> "MAC addresses",
> ETHER_ADDR_LEN * hw->mac.num_rar_entries);
> - return -ENOMEM;
> + diag = -ENOMEM;
> + goto error;
> }
> /* Copy the permanent MAC address */
> ether_addr_copy((struct ether_addr *) hw->mac.perm_addr, @@ -
> 876,7 +877,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
> eth_driver *eth_drv,
> PMD_INIT_LOG(ERR,
> "Failed to allocate %d bytes needed to store MAC
> addresses",
> ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
> - return -ENOMEM;
> + diag = -ENOMEM;
> + goto error;
> }
>
> /* initialize the vfta */
> @@ -886,7 +888,13 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
> eth_driver *eth_drv,
> memset(hwstrip, 0, sizeof(*hwstrip));
>
> /* initialize PF if max_vfs not zero */
> - ixgbe_pf_host_init(eth_dev);
> + diag = ixgbe_pf_host_init(eth_dev);
> + if (diag < 0) {
> + PMD_INIT_LOG(ERR,
> + "Failed to allocate %d bytes needed to store MAC
> addresses",
> + ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
> + goto error;
> + }
>
> ctrl_ext = IXGBE_READ_REG(hw, IXGBE_CTRL_EXT);
> /* let hardware know driver is loaded */ @@ -918,6 +926,11 @@
> eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
> ixgbe_enable_intr(eth_dev);
>
> return 0;
> +
> +error:
> + rte_free(eth_dev->data->hash_mac_addrs);
> + rte_free(eth_dev->data->mac_addrs);
> + return diag;
> }
>
>
> @@ -1434,7 +1447,93 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
> struct ixgbe_interrupt *intr =
> IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
>
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> + struct rte_eth_dev_info dev_info;
> + int retval;
> +
> PMD_INIT_FUNC_TRACE();
> + retval = ixgbe_pf_configure_mq_sriov(dev);
Do we need a non-SRIOV version to configure the mq mode?
In ixgbe_pf_configure_mq_sriov, in the case of no VF,
it will return early, so there is no chance to configure and check the mq mode and queue number.
Am I missing anything here?
> + if (retval <= 0)
> + return retval;
> +
> + uint16_t nb_rx_q = dev->data->nb_rx_queues;
> + uint16_t nb_tx_q = dev->data->nb_rx_queues;
Rx or tx here?
> +
> + /* For DCB we need to obtain maximum number of queues
> dynamically,
> + * as this depends on max VF exported in PF. */
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
> + /* Use dev_infos_get field as this might be pointer to PF or
> VF. */
> + (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
> + }
> +
> + /* For vmdq+dcb mode check our configuration before we go further
> */
> + if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
> + const struct rte_eth_vmdq_dcb_conf *conf;
> +
> + if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB,
> nb_rx_q != %d\n",
> + ETH_VMDQ_DCB_NUM_QUEUES);
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
> + if (conf->nb_queue_pools != ETH_16_POOLS &&
> + conf->nb_queue_pools != ETH_32_POOLS) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
> + "number of RX queue pools must
> be %d or %d\n",
> + ETH_16_POOLS, ETH_32_POOLS);
> + return (-EINVAL);
> + }
> + } else if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
> + /* For DCB mode check our configuration before we go
> further */
> + const struct rte_eth_dcb_rx_conf *conf;
> +
> + if (nb_rx_q != dev_info.max_rx_queues) {
> + PMD_INIT_LOG(ERR, " DCB, number of RX
> queues != %d\n",
> + ETH_DCB_NUM_QUEUES);
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
> + if (conf->nb_tcs != ETH_4_TCS &&
> + conf->nb_tcs != ETH_8_TCS) {
> + PMD_INIT_LOG(ERR, " DCB, number of RX TC must
> be %d or %d\n",
> + ETH_4_TCS, ETH_8_TCS);
> + return (-EINVAL);
> + }
> + }
> +
> + if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
> + const struct rte_eth_vmdq_dcb_tx_conf *conf;
> +
> + if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB, number of TX
> queues != %d\n",
> + ETH_VMDQ_DCB_NUM_QUEUES);
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
> + if (conf->nb_queue_pools != ETH_16_POOLS &&
> + conf->nb_queue_pools != ETH_32_POOLS) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
> + "number of TX qqueue pools must
> be %d or %d\n",
> + ETH_16_POOLS, ETH_32_POOLS);
> + return (-EINVAL);
> + }
> + } else if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
> + const struct rte_eth_dcb_tx_conf *conf;
> +
> + if (nb_tx_q != dev_info.max_tx_queues) {
> + PMD_INIT_LOG(ERR, " DCB, number of queues must
> be %d\n",
> + ETH_DCB_NUM_QUEUES);
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
> + if (conf->nb_tcs != ETH_4_TCS &&
> + conf->nb_tcs != ETH_8_TCS) {
> + PMD_INIT_LOG(ERR, " DCB, number of TX TC must
> be %d or %d\n",
> + ETH_4_TCS, ETH_8_TCS);
> + return (-EINVAL);
> + }
> + }
Better to have a separate function for this new code.
>
> /* set flag to update link status after init */
> intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE; diff --git
> a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> index 1383194..e70a6e8 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> @@ -348,11 +348,14 @@ void ixgbe_vlan_hw_strip_enable_all(struct
> rte_eth_dev *dev);
>
> void ixgbe_vlan_hw_strip_disable_all(struct rte_eth_dev *dev);
>
> -void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
> +int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
>
> void ixgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
>
> +int ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev);
> +
> int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev);
>
> uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t
> orig_val);
> +
> #endif /* _IXGBE_ETHDEV_H_ */
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> b/lib/librte_pmd_ixgbe/ixgbe_pf.c index 4103e97..a7b9333 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> @@ -91,7 +91,7 @@ ixgbe_mb_intr_setup(struct rte_eth_dev *dev)
> return 0;
> }
>
> -void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
> +int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
> {
> struct ixgbe_vf_info **vfinfo =
> IXGBE_DEV_PRIVATE_TO_P_VFDATA(eth_dev->data-
> >dev_private);
> @@ -101,39 +101,31 @@ void ixgbe_pf_host_init(struct rte_eth_dev
> *eth_dev)
> IXGBE_DEV_PRIVATE_TO_UTA(eth_dev->data->dev_private);
> struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> + int retval;
> uint16_t vf_num;
> - uint8_t nb_queue;
>
> PMD_INIT_FUNC_TRACE();
>
> - RTE_ETH_DEV_SRIOV(eth_dev).active = 0;
> - if (0 == (vf_num = dev_num_vf(eth_dev)))
> - return;
> + /* Fill sriov structure using default configuration. */
> + retval = ixgbe_pf_configure_mq_sriov(eth_dev);
> + if (retval != 0) {
> + if (retval < 0)
> + PMD_INIT_LOG(ERR, " Setting up SRIOV with default
> device "
> + "configuration should not fail. This is a
> BUG.");
> + return 0;
> + }
>
> + vf_num = dev_num_vf(eth_dev);
> *vfinfo = rte_zmalloc("vf_info", sizeof(struct ixgbe_vf_info) *
> vf_num, 0);
> - if (*vfinfo == NULL)
> - rte_panic("Cannot allocate memory for private VF data\n");
> + if (*vfinfo == NULL) {
> + PMD_INIT_LOG(ERR, "Cannot allocate memory for private VF
> data.");
> + return (-ENOMEM);
> + }
>
> memset(mirror_info,0,sizeof(struct ixgbe_mirror_info));
> memset(uta_info,0,sizeof(struct ixgbe_uta_info));
> hw->mac.mc_filter_type = 0;
>
> - if (vf_num >= ETH_32_POOLS) {
> - nb_queue = 2;
> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
> - } else if (vf_num >= ETH_16_POOLS) {
> - nb_queue = 4;
> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
> - } else {
> - nb_queue = 8;
> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
> - }
> -
> - RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
> - RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
> - RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
> - RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
> (uint16_t)(vf_num * nb_queue);
> -
> ixgbe_vf_perm_addr_gen(eth_dev, vf_num);
>
> /* init_mailbox_params */
> @@ -142,7 +134,169 @@ void ixgbe_pf_host_init(struct rte_eth_dev
> *eth_dev)
> /* set mb interrupt mask */
> ixgbe_mb_intr_setup(eth_dev);
>
> - return;
> + return 0;
> +}
> +
> +
> +/*
> + * Function that makes SRIOV configuration, based on device
> +configuration,
> + * number of requested queues and number of VFs created.
> + * Function returns:
> + * 1 - SRIOV is not enabled (no VF created)
> + * 0 - proper SRIOV configuration found.
> + * -EINVAL - no suitable SRIOV configuration found.
> + */
> +int
> +ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev) {
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> + struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
> + uint16_t vf_num;
> +
> + vf_num = dev_num_vf(dev);
> + if (vf_num == 0) {
> + memset(sriov, 0, sizeof(*sriov));
> + return 1;
> + }
> +
> + /* Check multi-queue mode. */
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> + (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS)
> ||
> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
> + /* SRIOV only works in VMDq enable mode */
> + PMD_INIT_LOG(ERR, " SRIOV active, "
> + "invlaid VMDQ rx mode (%u) or tx (%u)
> mode.",
> + dev_conf->rxmode.mq_mode, dev_conf-
> >txmode.mq_mode);
> + return (-EINVAL);
> + }
> +
> + switch (dev_conf->rxmode.mq_mode) {
> + case ETH_MQ_RX_VMDQ_DCB:
> + if (vf_num <= ETH_16_POOLS)
> + sriov->nb_rx_q_per_pool = 8;
> + else if (vf_num <= ETH_32_POOLS)
> + sriov->nb_rx_q_per_pool = 4;
> + else {
> + PMD_INIT_LOG(ERR,
> + "DCB (SRIOV active) - VF count (%d) must be
> less or equal 32.",
> + vf_num);
> + return (-EINVAL);
> + }
> +
> + if (dev->data->nb_rx_queues < sriov->nb_rx_q_per_pool) {
> + PMD_INIT_LOG(WARNING,
> + "DCB (SRIOV active) rx queues (%d) count is
> not equal %d.",
> + dev->data->nb_rx_queues,
> + sriov->nb_rx_q_per_pool);
> + }
> + break;
> + case ETH_MQ_RX_RSS:
> + PMD_INIT_LOG(INFO, "RSS (SRIOV active), "
> + "rx mq mode is changed from: mq_mode %u
> into VMDQ mq_mode %u.",
> + dev_conf->rxmode.mq_mode, dev->data-
> >dev_conf.rxmode.mq_mode);
> + dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_RSS;
> + /* fallthrough */
> + case ETH_MQ_RX_VMDQ_RSS:
> + if (vf_num >= ETH_64_POOLS) {
> + /* FIXME: Is vf_num > 64 really supported by
> hardware? */
> + PMD_INIT_LOG(ERR, "RSS (SRIOV active), "
> + "VFs num must be less or equal 64.");
> + return (-EINVAL);
> + } else if (vf_num >= ETH_32_POOLS) {
> + if (dev->data->nb_rx_queues != 1 && dev->data-
> >nb_rx_queues != 2) {
> + PMD_INIT_LOG(ERR, "RSS (SRIOV active, VF
> count >= 32),"
> + "invalid rx queues count %d.
> It must be 1 or 2.",
> + dev->data->nb_rx_queues);
> + return (-EINVAL);
> + }
> +
> + sriov->nb_rx_q_per_pool = dev->data-
> >nb_rx_queues;
> + } else {
> + /* FIXME: is VT(16) + RSS really supported? */
Yes, I think it is supported.
> + if (dev->data->nb_rx_queues != 4) {
> + PMD_INIT_LOG(ERR, "RSS (SRIOV active, VFs
> count < 32), "
> + "invalid rx queues count %d.
> It must be 4.",
> + dev->data->nb_rx_queues);
> + return (-EINVAL);
> + }
> +
> + sriov->nb_rx_q_per_pool = 4;
Better to use a macro to replace the number; the same applies above.
> + }
> + break;
> + default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
> + /* if no mq mode is configured, use the default scheme */
> + if (dev->data->dev_conf.rxmode.mq_mode !=
> ETH_MQ_RX_VMDQ_ONLY) {
> + PMD_INIT_LOG(INFO, "Rx mq mode changed
> from %u into VMDQ %u.",
> + dev->data-
> >dev_conf.rxmode.mq_mode, ETH_MQ_RX_VMDQ_ONLY);
> +
> + dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_ONLY;
> + }
> +
> + /* queue 0 of each pool is used. */
> + sriov->nb_rx_q_per_pool = 1;
> + break;
> + }
> +
> + switch (dev_conf->txmode.mq_mode) {
> + case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
> + if (vf_num <= ETH_16_POOLS)
> + sriov->nb_tx_q_per_pool = 8;
> + else if (vf_num <= ETH_32_POOLS)
> + sriov->nb_tx_q_per_pool = 4;
> + else if (vf_num <= ETH_64_POOLS)
> + sriov->nb_tx_q_per_pool = 1;
> + else {
> + PMD_INIT_LOG(ERR, "DCB (SRIOV active), "
> + "VF count (%d) must be less or equal
> 64.",
> + vf_num);
> + return (-EINVAL);
> + }
> + break;
> + default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
> + /* if no mq mode is configured, use the default scheme */
> + if (dev->data->dev_conf.txmode.mq_mode !=
> ETH_MQ_TX_VMDQ_ONLY) {
> + PMD_INIT_LOG(INFO, "Tx mq mode is changed
> from %u into VMDQ %u.",
> + dev->data-
> >dev_conf.txmode.mq_mode, ETH_MQ_TX_VMDQ_ONLY);
> +
> + dev->data->dev_conf.txmode.mq_mode =
> ETH_MQ_TX_VMDQ_ONLY;
> + }
> +
> + /* queue 0 of each pool is used. */
> + sriov->nb_tx_q_per_pool = 1;
> + break;
> + }
> +
> + sriov->def_vmdq_idx = vf_num;
> +
> + /*
> + * Pools start at 2xN, 4xN or 8xN
> + */
> + if (vf_num >= ETH_32_POOLS) {
> + /* This must be vf_num <= ETH_64_POOLS */
> + sriov->active = ETH_64_POOLS;
> + sriov->def_pool_q_idx = vf_num * 2;
> + } else if (vf_num >= ETH_16_POOLS) {
> + sriov->active = ETH_32_POOLS;
> + sriov->def_pool_q_idx = vf_num * 4;
> + } else {
> + sriov->active = ETH_16_POOLS;
> + sriov->def_pool_q_idx = vf_num * 8;
> + }
> +
> + /* Check if available queus count is not less than allocated.*/
A typo: queus
> + if (dev->data->nb_rx_queues > sriov->nb_rx_q_per_pool) {
> + PMD_INIT_LOG(ERR, "SRIOV active, rx queue count must
> less or equal %d.",
> + sriov->nb_rx_q_per_pool);
> + return (-EINVAL);
> + }
> +
> + if (dev->data->nb_rx_queues > sriov->nb_tx_q_per_pool) {
Replace nb_rx_queues with nb_tx_queues?
> + PMD_INIT_LOG(ERR, "SRIOV active, tx queue count must
> less or equal %d.",
> + sriov->nb_tx_q_per_pool);
> + return (-EINVAL);
> + }
> +
> + return 0;
> }
>
> int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
> --
> 1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool
2015-02-25 3:24 ` Ouyang, Changchun
@ 2015-02-25 7:47 ` Pawel Wodkowski
0 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-25 7:47 UTC (permalink / raw)
To: Ouyang, Changchun, dev
On 2015-02-25 04:24, Ouyang, Changchun wrote:
>
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
>> Sent: Thursday, February 19, 2015 11:55 PM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx
>> and tx nb_q_per_pool
>>
[...]
>>
>> /* check valid queue number */
>> - if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
>> - (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
>> + if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)
>
> Here, how about use nb_rx_q_per_pool to replace nb_tx_q_per_pool ?
> so it will be more clear to check rx queue number.
Yes, this should be nb_rx_q_per_pool. I missed it because in the next patch I
moved this code and corrected it "on the fly" :). I will correct this in the
next version.
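The corrected check would then presumably read:

	if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool) ||
	    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)) {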
--
Pawel
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver
2015-02-25 6:14 ` Ouyang, Changchun
@ 2015-02-25 9:57 ` Pawel Wodkowski
0 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-25 9:57 UTC (permalink / raw)
To: Ouyang, Changchun, dev
On 2015-02-25 07:14, Ouyang, Changchun wrote:
>
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
>> Sent: Thursday, February 19, 2015 11:55 PM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode()
>> logic to driver
>>
>> Function rte_eth_dev_check_mq_mode() is driver specific. It should be
>> done in PF configuration phase. This patch move igb/ixgbe driver specific mq
>> check and SRIOV configuration code to driver part. Also rewriting log
>> messages to be shorter and more descriptive.
>>
>> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>> ---
>> lib/librte_ether/rte_ethdev.c | 197 -----------------------------------
>> lib/librte_pmd_e1000/igb_ethdev.c | 43 ++++++++
>> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 105 ++++++++++++++++++-
>> lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +-
>> lib/librte_pmd_ixgbe/ixgbe_pf.c | 202
>> +++++++++++++++++++++++++++++++-----
>> 5 files changed, 327 insertions(+), 225 deletions(-)
>>
>> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>> index 4007054..aa27e39 100644
>> --- a/lib/librte_ether/rte_ethdev.c
>> +++ b/lib/librte_ether/rte_ethdev.c
>> @@ -502,195 +502,6 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev
>> *dev, uint16_t nb_queues)
>> return (0);
>> }
>>
>> -static int
>> -rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q) -{
>> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>> - switch (nb_rx_q) {
>> - case 1:
>> - case 2:
>> - RTE_ETH_DEV_SRIOV(dev).active =
>> - ETH_64_POOLS;
>> - break;
>> - case 4:
>> - RTE_ETH_DEV_SRIOV(dev).active =
>> - ETH_32_POOLS;
>> - break;
>> - default:
>> - return -EINVAL;
>> - }
>> -
>> - RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = nb_rx_q;
>> - RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
>> - dev->pci_dev->max_vfs * nb_rx_q;
>> -
>> - return 0;
>> -}
>> -
>> -static int
>> -rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t
>> nb_tx_q,
>> - const struct rte_eth_conf *dev_conf)
>> -{
>> - struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>> -
>> - if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
>> - /* check multi-queue mode */
>> - if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
>> - (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS)
>> ||
>> - (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
>> - /* SRIOV only works in VMDq enable mode */
>> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>> - " SRIOV active, "
>> - "wrong VMDQ mq_mode rx %u
>> tx %u\n",
>> - port_id,
>> - dev_conf->rxmode.mq_mode,
>> - dev_conf->txmode.mq_mode);
>> - return (-EINVAL);
>> - }
>> -
>> - switch (dev_conf->rxmode.mq_mode) {
>> - case ETH_MQ_RX_VMDQ_DCB:
>> - case ETH_MQ_RX_VMDQ_DCB_RSS:
>> - /* DCB/RSS VMDQ in SRIOV mode, not implement
>> yet */
>> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>> - " SRIOV active, "
>> - "unsupported VMDQ mq_mode
>> rx %u\n",
>> - port_id, dev_conf-
>>> rxmode.mq_mode);
>> - return (-EINVAL);
>> - case ETH_MQ_RX_RSS:
>> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>> - " SRIOV active, "
>> - "Rx mq mode is changed from:"
>> - "mq_mode %u into VMDQ
>> mq_mode %u\n",
>> - port_id,
>> - dev_conf->rxmode.mq_mode,
>> - dev->data-
>>> dev_conf.rxmode.mq_mode);
>> - case ETH_MQ_RX_VMDQ_RSS:
>> - dev->data->dev_conf.rxmode.mq_mode =
>> ETH_MQ_RX_VMDQ_RSS;
>> - if (nb_rx_q <=
>> RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool)
>> - if
>> (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
>> - PMD_DEBUG_TRACE("ethdev
>> port_id=%d"
>> - " SRIOV active, invalid queue"
>> - " number for VMDQ RSS,
>> allowed"
>> - " value are 1, 2 or 4\n",
>> - port_id);
>> - return -EINVAL;
>> - }
>> - break;
>> - default: /* ETH_MQ_RX_VMDQ_ONLY or
>> ETH_MQ_RX_NONE */
>> - /* if nothing mq mode configure, use default scheme
>> */
>> - dev->data->dev_conf.rxmode.mq_mode =
>> ETH_MQ_RX_VMDQ_ONLY;
>> - if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
>> -
>> RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
>> - break;
>> - }
>> -
>> - switch (dev_conf->txmode.mq_mode) {
>> - case ETH_MQ_TX_VMDQ_DCB:
>> - /* DCB VMDQ in SRIOV mode, not implement yet */
>> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>> - " SRIOV active, "
>> - "unsupported VMDQ mq_mode
>> tx %u\n",
>> - port_id, dev_conf-
>>> txmode.mq_mode);
>> - return (-EINVAL);
>> - default: /* ETH_MQ_TX_VMDQ_ONLY or
>> ETH_MQ_TX_NONE */
>> - /* if nothing mq mode configure, use default scheme
>> */
>> - dev->data->dev_conf.txmode.mq_mode =
>> ETH_MQ_TX_VMDQ_ONLY;
>> - break;
>> - }
>> -
>> - /* check valid queue number */
>> - if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)
>> ||
>> - (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool))
>> {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV
>> active, "
>> - "queue number must less equal to %d\n",
>> - port_id,
>> RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
>> - return (-EINVAL);
>> - }
>> - } else {
>> - /* For vmdb+dcb mode check our configuration before we
>> go further */
>> - if (dev_conf->rxmode.mq_mode ==
>> ETH_MQ_RX_VMDQ_DCB) {
>> - const struct rte_eth_vmdq_dcb_conf *conf;
>> -
>> - if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> VMDQ+DCB, nb_rx_q "
>> - "!= %d\n",
>> - port_id,
>> ETH_VMDQ_DCB_NUM_QUEUES);
>> - return (-EINVAL);
>> - }
>> - conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
>> - if (! (conf->nb_queue_pools == ETH_16_POOLS ||
>> - conf->nb_queue_pools == ETH_32_POOLS)) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> VMDQ+DCB selected, "
>> - "nb_queue_pools must
>> be %d or %d\n",
>> - port_id, ETH_16_POOLS,
>> ETH_32_POOLS);
>> - return (-EINVAL);
>> - }
>> - }
>> - if (dev_conf->txmode.mq_mode ==
>> ETH_MQ_TX_VMDQ_DCB) {
>> - const struct rte_eth_vmdq_dcb_tx_conf *conf;
>> -
>> - if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> VMDQ+DCB, nb_tx_q "
>> - "!= %d\n",
>> - port_id,
>> ETH_VMDQ_DCB_NUM_QUEUES);
>> - return (-EINVAL);
>> - }
>> - conf = &(dev_conf-
>>> tx_adv_conf.vmdq_dcb_tx_conf);
>> - if (! (conf->nb_queue_pools == ETH_16_POOLS ||
>> - conf->nb_queue_pools == ETH_32_POOLS)) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> VMDQ+DCB selected, "
>> - "nb_queue_pools != %d or
>> nb_queue_pools "
>> - "!= %d\n",
>> - port_id, ETH_16_POOLS,
>> ETH_32_POOLS);
>> - return (-EINVAL);
>> - }
>> - }
>> -
>> - /* For DCB mode check our configuration before we go
>> further */
>> - if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
>> - const struct rte_eth_dcb_rx_conf *conf;
>> -
>> - if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> DCB, nb_rx_q "
>> - "!= %d\n",
>> - port_id,
>> ETH_DCB_NUM_QUEUES);
>> - return (-EINVAL);
>> - }
>> - conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
>> - if (! (conf->nb_tcs == ETH_4_TCS ||
>> - conf->nb_tcs == ETH_8_TCS)) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> DCB selected, "
>> - "nb_tcs != %d or nb_tcs "
>> - "!= %d\n",
>> - port_id, ETH_4_TCS,
>> ETH_8_TCS);
>> - return (-EINVAL);
>> - }
>> - }
>> -
>> - if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
>> - const struct rte_eth_dcb_tx_conf *conf;
>> -
>> - if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> DCB, nb_tx_q "
>> - "!= %d\n",
>> - port_id,
>> ETH_DCB_NUM_QUEUES);
>> - return (-EINVAL);
>> - }
>> - conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
>> - if (! (conf->nb_tcs == ETH_4_TCS ||
>> - conf->nb_tcs == ETH_8_TCS)) {
>> - PMD_DEBUG_TRACE("ethdev port_id=%d
>> DCB selected, "
>> - "nb_tcs != %d or nb_tcs "
>> - "!= %d\n",
>> - port_id, ETH_4_TCS,
>> ETH_8_TCS);
>> - return (-EINVAL);
>> - }
>> - }
>> - }
>> - return 0;
>> -}
>> -
>> int
>> rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> const struct rte_eth_conf *dev_conf) @@ -798,14 +609,6
>> @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t
>> nb_tx_q,
>> ETHER_MAX_LEN;
>> }
>>
>> - /* multipe queue mode checking */
>> - diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q,
>> dev_conf);
>> - if (diag != 0) {
>> - PMD_DEBUG_TRACE("port%d
>> rte_eth_dev_check_mq_mode = %d\n",
>> - port_id, diag);
>> - return diag;
>> - }
>> -
>> /*
>> * Setup new number of RX/TX queues and reconfigure device.
>> */
>> diff --git a/lib/librte_pmd_e1000/igb_ethdev.c
>> b/lib/librte_pmd_e1000/igb_ethdev.c
>> index d451086..5c922df 100644
>> --- a/lib/librte_pmd_e1000/igb_ethdev.c
>> +++ b/lib/librte_pmd_e1000/igb_ethdev.c
>> @@ -742,6 +742,49 @@ eth_igb_configure(struct rte_eth_dev *dev)
>> struct e1000_interrupt *intr =
>> E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
>>
>> +	enum rte_eth_rx_mq_mode rx_mq_mode = dev->data->dev_conf.rxmode.mq_mode;
>> +	enum rte_eth_tx_mq_mode tx_mq_mode = dev->data->dev_conf.txmode.mq_mode;
>> +
>> + if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
>> + /* Check multi-queue mode.
>> + * To no break software we accept ETH_MQ_RX_NONE as
>> this might be used
>> + * to turn off VLAN filter.
>> + *
>> + * FIXME if support RSS together with VMDq & SRIOV
>> + */
>> + if (rx_mq_mode != ETH_MQ_RX_NONE &&
>> + (rx_mq_mode & ETH_MQ_RX_VMDQ_ONLY)
>> == 0) {
>> + PMD_INIT_LOG(WARNING, " SRIOV active, RX
>> mode %d is not supported."
>> + "Driver will behave as in %d mode as
>> fallback.",
>> + rx_mq_mode, ETH_MQ_RX_NONE);
>> + }
>> +
>> +		/* TX mode is not used in this driver so the mode might be ignored. */
>> + if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
>> + /* SRIOV only works in VMDq enable mode */
>> + PMD_INIT_LOG(WARNING, "TX mode %d is not
>> supported in SRIOV. "
>> + "Driver will behave as in %d mode as
>> fallback.",
>> + tx_mq_mode,
>> ETH_MQ_TX_VMDQ_ONLY);
>> + }
>> + } else {
>> + /*
>> + * To no break software that set invalid mode, only display
>> warning if
>> + * invalid mode is used.
>> + */
>> + if ((rx_mq_mode & (ETH_MQ_RX_RSS_FLAG |
>> ETH_MQ_RX_VMDQ_FLAG))
>> + != rx_mq_mode) {
>> + PMD_INIT_LOG(WARNING, "RX mode %d is not
>> supported. Driver will "
>> + "behave as in %d mode as fallback.",
>> rx_mq_mode,
>> + rx_mq_mode &
>> (ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG));
>> + }
>> +
>> + if (tx_mq_mode != ETH_MQ_TX_NONE) {
>> + PMD_INIT_LOG(WARNING, "TX mode %d is not
>> supported."
>> + "Driver will behave as in %d mode as
>> fallback.",
>> + tx_mq_mode, ETH_MQ_TX_NONE);
>> + }
>> + }
>> +
>
> Better to have new function for these new codes.
>
>> PMD_INIT_FUNC_TRACE();
>> intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;
>> PMD_INIT_FUNC_TRACE();
>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>> index 02b9cda..8e9da3b 100644
>> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>> @@ -863,7 +863,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
>> eth_driver *eth_drv,
>> "Failed to allocate %u bytes needed to store "
>> "MAC addresses",
>> ETHER_ADDR_LEN * hw->mac.num_rar_entries);
>> - return -ENOMEM;
>> + diag = -ENOMEM;
>> + goto error;
>> }
>> /* Copy the permanent MAC address */
>> ether_addr_copy((struct ether_addr *) hw->mac.perm_addr, @@ -
>> 876,7 +877,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
>> eth_driver *eth_drv,
>> PMD_INIT_LOG(ERR,
>> "Failed to allocate %d bytes needed to store MAC
>> addresses",
>> ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
>> - return -ENOMEM;
>> + diag = -ENOMEM;
>> + goto error;
>> }
>>
>> /* initialize the vfta */
>> @@ -886,7 +888,13 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
>> eth_driver *eth_drv,
>> memset(hwstrip, 0, sizeof(*hwstrip));
>>
>> /* initialize PF if max_vfs not zero */
>> - ixgbe_pf_host_init(eth_dev);
>> + diag = ixgbe_pf_host_init(eth_dev);
>> + if (diag < 0) {
>> + PMD_INIT_LOG(ERR,
>> + "Failed to allocate %d bytes needed to store MAC
>> addresses",
>> + ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
>> + goto error;
>> + }
>>
>> ctrl_ext = IXGBE_READ_REG(hw, IXGBE_CTRL_EXT);
>> /* let hardware know driver is loaded */ @@ -918,6 +926,11 @@
>> eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
>> ixgbe_enable_intr(eth_dev);
>>
>> return 0;
>> +
>> +error:
>> + rte_free(eth_dev->data->hash_mac_addrs);
>> + rte_free(eth_dev->data->mac_addrs);
>> + return diag;
>> }
>>
>>
>> @@ -1434,7 +1447,93 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
>> struct ixgbe_interrupt *intr =
>> IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
>>
>> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
>> + struct rte_eth_dev_info dev_info;
>> + int retval;
>> +
>> PMD_INIT_FUNC_TRACE();
>> + retval = ixgbe_pf_configure_mq_sriov(dev);
>
> Do we need a non-sriov version to configure mq mode?
> In ixgbe_pf_configure_mq_sriov, in the case of no vf,
> It will early return, then no chance to configure and check mq mode and queue number.
> Do I miss anything here?
In the case of no VF the function returns 1, and the 'if' below then allows the
non-SRIOV configuration path.
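Spelled out, the caller-side convention is (a sketch of the same logic):

	retval = ixgbe_pf_configure_mq_sriov(dev);
	if (retval < 0)
		return retval;	/* -EINVAL: no suitable SRIOV configuration */
	if (retval == 0)
		return 0;	/* SRIOV configured, nothing more to do here */
	/* retval == 1: no VFs created, continue with non-SRIOV configuration */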
>
>> + if (retval <= 0)
>> + return retval;
>> +
>> + uint16_t nb_rx_q = dev->data->nb_rx_queues;
>> + uint16_t nb_tx_q = dev->data->nb_rx_queues;
>
> Rx or tx here?
Yes, it should be tx.
>
>> +
>> +	/* For DCB we need to obtain the maximum number of queues dynamically,
>> +	 * as this depends on max VF exported in PF. */
>> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
>> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
>> + /* Use dev_infos_get field as this might be pointer to PF or
>> VF. */
>> + (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
>> + }
>> +
>> + /* For vmdq+dcb mode check our configuration before we go further
>> */
>> + if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
>> + const struct rte_eth_vmdq_dcb_conf *conf;
>> +
>> + if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
>> + PMD_INIT_LOG(ERR, " VMDQ+DCB,
>> nb_rx_q != %d\n",
>> + ETH_VMDQ_DCB_NUM_QUEUES);
>> + return (-EINVAL);
>> + }
>> + conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
>> + if (conf->nb_queue_pools != ETH_16_POOLS &&
>> + conf->nb_queue_pools != ETH_32_POOLS) {
>> + PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
>> + "number of RX queue pools must
>> be %d or %d\n",
>> + ETH_16_POOLS, ETH_32_POOLS);
>> + return (-EINVAL);
>> + }
>> + } else if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
>> + /* For DCB mode check out configuration before we go
>> further */
>> + const struct rte_eth_dcb_rx_conf *conf;
>> +
>> + if (nb_rx_q != dev_info.max_rx_queues) {
>> + PMD_INIT_LOG(ERR, " DCB, number of RX
>> queues != %d\n",
>> + ETH_DCB_NUM_QUEUES);
>> + return (-EINVAL);
>> + }
>> + conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
>> + if (conf->nb_tcs != ETH_4_TCS &&
>> + conf->nb_tcs != ETH_8_TCS) {
>> + PMD_INIT_LOG(ERR, " DCB, number of RX TC must
>> be %d or %d\n",
>> + ETH_4_TCS, ETH_8_TCS);
>> + return (-EINVAL);
>> + }
>> + }
>> +
>> + if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
>> + const struct rte_eth_vmdq_dcb_tx_conf *conf;
>> +
>> + if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
>> + PMD_INIT_LOG(ERR, " VMDQ+DCB, number of TX
>> queues != %d\n",
>> + ETH_VMDQ_DCB_NUM_QUEUES);
>> + return (-EINVAL);
>> + }
>> + conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
>> + if (conf->nb_queue_pools != ETH_16_POOLS &&
>> + conf->nb_queue_pools != ETH_32_POOLS) {
>> + PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
>> + "number of TX qqueue pools must
>> be %d or %d\n",
>> + ETH_16_POOLS, ETH_32_POOLS);
>> + return (-EINVAL);
>> + }
>> + } else if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
>> + const struct rte_eth_dcb_tx_conf *conf;
>> +
>> + if (nb_tx_q != dev_info.max_tx_queues) {
>> + PMD_INIT_LOG(ERR, " DCB, number of queues must
>> be %d\n",
>> + ETH_DCB_NUM_QUEUES);
>> + return (-EINVAL);
>> + }
>> + conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
>> + if (conf->nb_tcs != ETH_4_TCS &&
>> + conf->nb_tcs != ETH_8_TCS) {
>> + PMD_INIT_LOG(ERR, " DCB, number of TX TC must
>> be %d or %d\n",
>> + ETH_4_TCS, ETH_8_TCS);
>> + return (-EINVAL);
>> + }
>> + }
>
> Better to have a separate function for these new codes.
>
>>
>> /* set flag to update link status after init */
>> intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE; diff --git
>> a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>> index 1383194..e70a6e8 100644
>> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>> @@ -348,11 +348,14 @@ void ixgbe_vlan_hw_strip_enable_all(struct
>> rte_eth_dev *dev);
>>
>> void ixgbe_vlan_hw_strip_disable_all(struct rte_eth_dev *dev);
>>
>> -void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
>> +int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
>>
>> void ixgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
>>
>> +int ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev);
>> +
>> int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev);
>>
>> uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t
>> orig_val);
>> +
>> #endif /* _IXGBE_ETHDEV_H_ */
>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c
>> b/lib/librte_pmd_ixgbe/ixgbe_pf.c index 4103e97..a7b9333 100644
>> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
>> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
>> @@ -91,7 +91,7 @@ ixgbe_mb_intr_setup(struct rte_eth_dev *dev)
>> return 0;
>> }
>>
>> -void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
>> +int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
>> {
>> struct ixgbe_vf_info **vfinfo =
>> IXGBE_DEV_PRIVATE_TO_P_VFDATA(eth_dev->data-
>>> dev_private);
>> @@ -101,39 +101,31 @@ void ixgbe_pf_host_init(struct rte_eth_dev
>> *eth_dev)
>> IXGBE_DEV_PRIVATE_TO_UTA(eth_dev->data->dev_private);
>> struct ixgbe_hw *hw =
>> IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
>> + int retval;
>> uint16_t vf_num;
>> - uint8_t nb_queue;
>>
>> PMD_INIT_FUNC_TRACE();
>>
>> - RTE_ETH_DEV_SRIOV(eth_dev).active = 0;
>> - if (0 == (vf_num = dev_num_vf(eth_dev)))
>> - return;
>> + /* Fill sriov structure using default configuration. */
>> + retval = ixgbe_pf_configure_mq_sriov(eth_dev);
>> + if (retval != 0) {
>> + if (retval < 0)
>> + PMD_INIT_LOG(ERR, " Setting up SRIOV with default
>> device "
>> + "configuration should not fail. This is a
>> BUG.");
>> + return 0;
>> + }
>>
>> + vf_num = dev_num_vf(eth_dev);
>> *vfinfo = rte_zmalloc("vf_info", sizeof(struct ixgbe_vf_info) *
>> vf_num, 0);
>> - if (*vfinfo == NULL)
>> - rte_panic("Cannot allocate memory for private VF data\n");
>> + if (*vfinfo == NULL) {
>> + PMD_INIT_LOG(ERR, "Cannot allocate memory for private VF
>> data.");
>> + return (-ENOMEM);
>> + }
>>
>> memset(mirror_info,0,sizeof(struct ixgbe_mirror_info));
>> memset(uta_info,0,sizeof(struct ixgbe_uta_info));
>> hw->mac.mc_filter_type = 0;
>>
>> - if (vf_num >= ETH_32_POOLS) {
>> - nb_queue = 2;
>> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
>> - } else if (vf_num >= ETH_16_POOLS) {
>> - nb_queue = 4;
>> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
>> - } else {
>> - nb_queue = 8;
>> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
>> - }
>> -
>> - RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
>> - RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
>> - RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
>> - RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
>> (uint16_t)(vf_num * nb_queue);
>> -
>> ixgbe_vf_perm_addr_gen(eth_dev, vf_num);
>>
>> /* init_mailbox_params */
>> @@ -142,7 +134,169 @@ void ixgbe_pf_host_init(struct rte_eth_dev
>> *eth_dev)
>> /* set mb interrupt mask */
>> ixgbe_mb_intr_setup(eth_dev);
>>
>> - return;
>> + return 0;
>> +}
>> +
>> +
>> +/*
>> + * Function that make SRIOV configuration, based on device
>> +configuration,
>> + * number of requested queues and number of VF created.
>> + * Function returns:
>> + * 1 - SRIOV is not enabled (no VF created)
>> + * 0 - proper SRIOV configuration found.
>> + * -EINVAL - no suitable SRIOV configuration found.
>> + */
>> +int
>> +ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev) {
>> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
>> + struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
>> + uint16_t vf_num;
>> +
>> + vf_num = dev_num_vf(dev);
>> + if (vf_num == 0) {
>> + memset(sriov, 0, sizeof(*sriov));
>> + return 1;
>> + }
>> +
>> + /* Check multi-queue mode. */
>> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
>> + (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS)
>> ||
>> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
>> + /* SRIOV only works in VMDq enable mode */
>> + PMD_INIT_LOG(ERR, " SRIOV active, "
>> + "invlaid VMDQ rx mode (%u) or tx (%u)
>> mode.",
>> + dev_conf->rxmode.mq_mode, dev_conf-
>>> txmode.mq_mode);
>> + return (-EINVAL);
>> + }
>> +
>> + switch (dev_conf->rxmode.mq_mode) {
>> + case ETH_MQ_RX_VMDQ_DCB:
>> + if (vf_num <= ETH_16_POOLS)
>> + sriov->nb_rx_q_per_pool = 8;
>> + else if (vf_num <= ETH_32_POOLS)
>> + sriov->nb_rx_q_per_pool = 4;
>> + else {
>> + PMD_INIT_LOG(ERR,
>> + "DCB (SRIOV active) - VF count (%d) must be
>> less or equal 32.",
>> + vf_num);
>> + return (-EINVAL);
>> + }
>> +
>> + if (dev->data->nb_rx_queues < sriov->nb_rx_q_per_pool) {
>> + PMD_INIT_LOG(WARNING,
>> + "DCB (SRIOV active) rx queues (%d) count is
>> not equal %d.",
>> + dev->data->nb_rx_queues,
>> + sriov->nb_rx_q_per_pool);
>> + }
>> + break;
>> + case ETH_MQ_RX_RSS:
>> + PMD_INIT_LOG(INFO, "RSS (SRIOV active), "
>> + "rx mq mode is changed from: mq_mode %u
>> into VMDQ mq_mode %u.",
>> + dev_conf->rxmode.mq_mode, dev->data-
>>> dev_conf.rxmode.mq_mode);
>> + dev->data->dev_conf.rxmode.mq_mode =
>> ETH_MQ_RX_VMDQ_RSS;
>> + /* falltrought */
>> + case ETH_MQ_RX_VMDQ_RSS:
>> + if (vf_num >= ETH_64_POOLS) {
>> + /* FIXME: Is vf_num > 64 realy supported by
>> hardware? */
>> + PMD_INIT_LOG(ERR, "RSS (SRIOV active), "
>> + "VFs num must be less or equal 64.");
>> + return (-EINVAL);
>> + } else if (vf_num >= ETH_32_POOLS) {
>> +			if (dev->data->nb_rx_queues != 1 && dev->data->nb_rx_queues != 2) {
>> +				PMD_INIT_LOG(ERR, "RSS (SRIOV active, VF count >= 32),"
>> +					"invalid rx queues count %d. It must be 1 or 2.",
>> +					dev->data->nb_rx_queues);
>> + return (-EINVAL);
>> + }
>> +
>> +			sriov->nb_rx_q_per_pool = dev->data->nb_rx_queues;
>> + } else {
>> + /* FIXME: is VT(16) + RSS realy supported? */
>
> Yes, I think it supports.
>
>> + if (dev->data->nb_rx_queues != 4) {
>> + PMD_INIT_LOG(ERR, "RSS (SRIOV active, VFs
>> count < 32), "
>> + "invalid rx queues count %d.
>> It must be 4.",
>> + dev->data->nb_rx_queues);
>> + return (-EINVAL);
>> + }
>> +
>> + sriov->nb_rx_q_per_pool = 4;
>
> Better to use macro to replace the number, so does above.
>
>> + }
>> + break;
>> + default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
>> + /* if nothing mq mode configure, use default scheme */
>> +		if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY) {
>> +			PMD_INIT_LOG(INFO, "Rx mq mode changed from %u into VMDQ %u.",
>> +				dev->data->dev_conf.rxmode.mq_mode, ETH_MQ_RX_VMDQ_ONLY);
>> +
>> +			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
>> + }
>> +
>> + /* queue 0 of each pool is used. */
>> + sriov->nb_rx_q_per_pool = 1;
>> + break;
>> + }
>> +
>> + switch (dev_conf->txmode.mq_mode) {
>> + case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
>> + if (vf_num <= ETH_16_POOLS)
>> + sriov->nb_tx_q_per_pool = 8;
>> + else if (vf_num <= ETH_32_POOLS)
>> + sriov->nb_tx_q_per_pool = 4;
>> + else if (vf_num <= ETH_64_POOLS)
>> + sriov->nb_tx_q_per_pool = 1;
>> + else {
>> + PMD_INIT_LOG(ERR, "DCB (SRIOV active), "
>> + "VF count (%d) must be less or equal
>> 64.",
>> + vf_num);
>> + return (-EINVAL);
>> + }
>> + break;
>> + default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
>> + /* if nothing mq mode configure, use default scheme */
>> +		if (dev->data->dev_conf.txmode.mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
>> +			PMD_INIT_LOG(INFO, "Tx mq mode is changed from %u into VMDQ %u.",
>> +				dev->data->dev_conf.txmode.mq_mode, ETH_MQ_TX_VMDQ_ONLY);
>> +
>> +			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
>> + }
>> +
>> + /* queue 0 of each pool is used. */
>> + sriov->nb_tx_q_per_pool = 1;
>> + break;
>> + }
>> +
>> + sriov->def_vmdq_idx = vf_num;
>> +
>> + /*
>> + * Pools starts at 2xN, 4xN or 8xN
>> + */
>> + if (vf_num >= ETH_32_POOLS) {
>> + /* This must be vf_num <= ETH_64_POOLS */
>> + sriov->active = ETH_64_POOLS;
>> + sriov->def_pool_q_idx = vf_num * 2;
>> + } else if (vf_num >= ETH_16_POOLS) {
>> + sriov->active = ETH_32_POOLS;
>> + sriov->def_pool_q_idx = vf_num * 4;
>> + } else {
>> + sriov->active = ETH_16_POOLS;
>> + sriov->def_pool_q_idx = vf_num * 8;
>> + }
>> +
>> + /* Check if available queus count is not less than allocated.*/
>
> A typo: queus
>
>> + if (dev->data->nb_rx_queues > sriov->nb_rx_q_per_pool) {
>> + PMD_INIT_LOG(ERR, "SRIOV active, rx queue count must
>> less or equal %d.",
>> + sriov->nb_rx_q_per_pool);
>> + return (-EINVAL);
>> + }
>> +
>> + if (dev->data->nb_rx_queues > sriov->nb_tx_q_per_pool) {
>
> Replace nb_rx_queues with nb_tx_queues?
Yes, you are right.
>
>> + PMD_INIT_LOG(ERR, "SRIOV active, tx queue count must
>> less or equal %d.",
>> + sriov->nb_tx_q_per_pool);
>> + return (-EINVAL);
>> + }
>> +
>> + return 0;
>> }
>>
>> int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
>> --
>> 1.9.1
>
--
Pawel
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV
2015-02-25 3:36 ` Ouyang, Changchun
@ 2015-02-25 11:29 ` Pawel Wodkowski
0 siblings, 0 replies; 41+ messages in thread
From: Pawel Wodkowski @ 2015-02-25 11:29 UTC (permalink / raw)
To: Ouyang, Changchun, dev
On 2015-02-25 04:36, Ouyang, Changchun wrote:
>> @@ -652,7 +655,9 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf) {
>> >	struct ixgbe_vf_info *vfinfo =
>> >		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
>> >-	uint32_t default_q = vf * RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
>> >+	struct ixgbe_dcb_config *dcbinfo =
>> >+		IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
>> >+	uint32_t default_q = RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx;
> Why does the default_q need to change here?
>
Because this field holds the default queue index.
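For reference, with the layout set up earlier in this series the PF default
queue index works out as below (a sketch of the logic from
ixgbe_pf_configure_mq_sriov):

	/* pools start at 2xN, 4xN or 8xN, so the PF default pool's first
	 * queue lands right after all VF queues */
	if (vf_num >= ETH_32_POOLS)		/* 64-pool mode, 2 queues per pool */
		def_pool_q_idx = vf_num * 2;
	else if (vf_num >= ETH_16_POOLS)	/* 32-pool mode, 4 queues per pool */
		def_pool_q_idx = vf_num * 4;
	else					/* 16-pool mode, 8 queues per pool */
		def_pool_q_idx = vf_num * 8;

E.g. with 16 VFs the device runs in 32-pool mode and the PF default queue index
is 16 * 4 = 64, which is the value ixgbe_get_vf_queues now reports.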
--
Pawel
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
` (6 preceding siblings ...)
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 7/7] pmd ixgbe: fix vlan setting in in PF Pawel Wodkowski
@ 2015-06-08 3:00 ` Zhang, Helin
7 siblings, 0 replies; 41+ messages in thread
From: Zhang, Helin @ 2015-06-08 3:00 UTC (permalink / raw)
To: Wodkowski, PawelX; +Cc: dev
Hi Pawel
Could you help rebase it to the latest code base? Then several DPDK developers here can help you with the code review.
I think your patches are really helpful for the DCB decoupling in the ethdev layer.
Regards,
Helin
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Thursday, February 19, 2015 11:55 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver
>
> This patchset enables DCB in SRIOV (ETH_MQ_RX_VMDQ_DCB and
> ETH_MQ_TX_VMDQ_DCB) for each VF and PF for ixgbe driver.
>
> As a side effect this allow to use multiple queues for TX in VF (8 if there is
> 16 or less VFs or 4 if there is 32 or less VFs) when PFC is not enabled.
>
> PATCH v4 changes:
> - resend patch as previous was sent by mistake with different one.
>
> PATCH v3 changes:
> - Rework patch to fit ixgbe RSS in VT mode changes.
> - move driver specific code from rte_ethdev.c to driver code.
> - fix ixgbe driver bug with VLAN filter enable in PF, discovered during testing.
>
> PATCH v2 changes:
> - Split patch for easier review.
> - Remove "pmd: add api version negotiation for ixgbe driver" and "pmd: extend
> mailbox api to report number of RX/TX queues" patches as those are already
> merged from another patch.
>
> Pawel Wodkowski (7):
> ethdev: Allow zero rx/tx queues in SRIOV mode
> pmd igb: fix VMDQ mode checking
> pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool
> move rte_eth_dev_check_mq_mode() logic to ixgbe driver
> pmd ixgbe: enable DCB in SRIOV
> tespmd: fix DCB in SRIOV mode support
> pmd ixgbe: fix vlan setting in in PF
>
> app/test-pmd/cmdline.c | 4 +-
> app/test-pmd/testpmd.c | 39 +++++--
> app/test-pmd/testpmd.h | 10 --
> lib/librte_ether/rte_ethdev.c | 212 ++--------------------------------
> lib/librte_ether/rte_ethdev.h | 3 +-
> lib/librte_pmd_e1000/igb_ethdev.c | 45 +++++++-
> lib/librte_pmd_e1000/igb_pf.c | 3 +-
> lib/librte_pmd_e1000/igb_rxtx.c | 2 +-
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 126 ++++++++++++++++++---
> lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +-
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 220
> +++++++++++++++++++++++++++++++-----
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 18 +--
> 12 files changed, 407 insertions(+), 280 deletions(-)
>
> --
> 1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver Pawel Wodkowski
2015-02-25 6:14 ` Ouyang, Changchun
@ 2015-06-09 4:06 ` Wu, Jingjing
1 sibling, 0 replies; 41+ messages in thread
From: Wu, Jingjing @ 2015-06-09 4:06 UTC (permalink / raw)
To: Wodkowski, PawelX, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pawel Wodkowski
> Sent: Thursday, February 19, 2015 11:55 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode()
> logic to driver
>
> Function rte_eth_dev_check_mq_mode() is driver specific. It should be
> done in PF configuration phase. This patch move igb/ixgbe driver specific mq
> check and SRIOV configuration code to driver part. Also rewriting log
> messages to be shorter and more descriptive.
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
> ---
> lib/librte_ether/rte_ethdev.c | 197 -----------------------------------
> lib/librte_pmd_e1000/igb_ethdev.c | 43 ++++++++
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 105 ++++++++++++++++++-
> lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +-
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 202
> +++++++++++++++++++++++++++++++-----
> 5 files changed, 327 insertions(+), 225 deletions(-)
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> index 02b9cda..8e9da3b 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> @@ -863,7 +863,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
> eth_driver *eth_drv,
> "Failed to allocate %u bytes needed to store "
> "MAC addresses",
> ETHER_ADDR_LEN * hw->mac.num_rar_entries);
> - return -ENOMEM;
> + diag = -ENOMEM;
> + goto error;
> }
> /* Copy the permanent MAC address */
> ether_addr_copy((struct ether_addr *) hw->mac.perm_addr, @@ -
> 876,7 +877,8 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
> eth_driver *eth_drv,
> PMD_INIT_LOG(ERR,
> "Failed to allocate %d bytes needed to store MAC
> addresses",
> ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
> - return -ENOMEM;
> + diag = -ENOMEM;
> + goto error;
> }
>
> /* initialize the vfta */
> @@ -886,7 +888,13 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct
> eth_driver *eth_drv,
> memset(hwstrip, 0, sizeof(*hwstrip));
>
> /* initialize PF if max_vfs not zero */
> - ixgbe_pf_host_init(eth_dev);
> + diag = ixgbe_pf_host_init(eth_dev);
> + if (diag < 0) {
> + PMD_INIT_LOG(ERR,
> + "Failed to allocate %d bytes needed to store MAC
> addresses",
> + ETHER_ADDR_LEN * IXGBE_VMDQ_NUM_UC_MAC);
> + goto error;
> + }
>
> ctrl_ext = IXGBE_READ_REG(hw, IXGBE_CTRL_EXT);
> /* let hardware know driver is loaded */ @@ -918,6 +926,11 @@
> eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
> ixgbe_enable_intr(eth_dev);
>
> return 0;
> +
> +error:
> + rte_free(eth_dev->data->hash_mac_addrs);
> + rte_free(eth_dev->data->mac_addrs);
> + return diag;
> }
>
>
> @@ -1434,7 +1447,93 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
> struct ixgbe_interrupt *intr =
> IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
>
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> + struct rte_eth_dev_info dev_info;
> + int retval;
> +
> PMD_INIT_FUNC_TRACE();
> + retval = ixgbe_pf_configure_mq_sriov(dev);
> + if (retval <= 0)
> + return retval;
> +
> + uint16_t nb_rx_q = dev->data->nb_rx_queues;
> + uint16_t nb_tx_q = dev->data->nb_rx_queues;
> +
> +	/* For DCB we need to obtain the maximum number of queues dynamically,
> +	 * as this depends on max VF exported in PF. */
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
> + /* Use dev_infos_get field as this might be pointer to PF or
> VF. */
> + (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
Why not call ixgbe_dev_info_get directly? It also looks like only max_rx_queues
and max_tx_queues are used below, so maybe hw->mac.max_rx_queues and
hw->mac.max_tx_queues could be used instead of calling a function.
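i.e. something along these lines (a sketch of the suggestion; it assumes the hw
handle is fetched with the IXGBE_DEV_PRIVATE_TO_HW accessor used elsewhere in
this file):

	struct ixgbe_hw *hw =
		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);

	if (nb_rx_q != hw->mac.max_rx_queues) {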
> + }
> +
> + /* For vmdq+dcb mode check our configuration before we go further
> */
> + if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
> + const struct rte_eth_vmdq_dcb_conf *conf;
> +
> + if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB,
> nb_rx_q != %d\n",
> + ETH_VMDQ_DCB_NUM_QUEUES);
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
> + if (conf->nb_queue_pools != ETH_16_POOLS &&
> + conf->nb_queue_pools != ETH_32_POOLS) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
> + "number of RX queue pools must
> be %d or %d\n",
> + ETH_16_POOLS, ETH_32_POOLS);
> + return (-EINVAL);
> + }
> + } else if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
> + /* For DCB mode check out configuration before we go
> further */
> + const struct rte_eth_dcb_rx_conf *conf;
> +
> + if (nb_rx_q != dev_info.max_rx_queues) {
> + PMD_INIT_LOG(ERR, " DCB, number of RX
> queues != %d\n",
> + ETH_DCB_NUM_QUEUES);
The check uses dev_info.max_rx_queues, while the log message prints ETH_DCB_NUM_QUEUES.
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
> + if (conf->nb_tcs != ETH_4_TCS &&
> + conf->nb_tcs != ETH_8_TCS) {
> + PMD_INIT_LOG(ERR, " DCB, number of RX TC must
> be %d or %d\n",
> + ETH_4_TCS, ETH_8_TCS);
> + return (-EINVAL);
> + }
> + }
> +
> + if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
> + const struct rte_eth_vmdq_dcb_tx_conf *conf;
> +
> + if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB, number of TX
> queues != %d\n",
> + ETH_VMDQ_DCB_NUM_QUEUES);
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
> + if (conf->nb_queue_pools != ETH_16_POOLS &&
> + conf->nb_queue_pools != ETH_32_POOLS) {
> + PMD_INIT_LOG(ERR, " VMDQ+DCB selected, "
> + "number of TX qqueue pools must
Typo: qqueue->queue
> be %d or %d\n",
> + ETH_16_POOLS, ETH_32_POOLS);
> + return (-EINVAL);
> + }
> + } else if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
> + const struct rte_eth_dcb_tx_conf *conf;
> +
> + if (nb_tx_q != dev_info.max_tx_queues) {
> + PMD_INIT_LOG(ERR, " DCB, number of queues must
> be %d\n",
> + ETH_DCB_NUM_QUEUES);
The check uses dev_info.max_tx_queues, while the log message prints ETH_DCB_NUM_QUEUES.
> + return (-EINVAL);
> + }
> + conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
> + if (conf->nb_tcs != ETH_4_TCS &&
> + conf->nb_tcs != ETH_8_TCS) {
> + PMD_INIT_LOG(ERR, " DCB, number of TX TC must
> be %d or %d\n",
> + ETH_4_TCS, ETH_8_TCS);
> + return (-EINVAL);
> + }
> + }
>
> /* set flag to update link status after init */
> intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE; diff --git
> a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> index 1383194..e70a6e8 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> @@ -348,11 +348,14 @@ void ixgbe_vlan_hw_strip_enable_all(struct
> rte_eth_dev *dev);
>
> void ixgbe_vlan_hw_strip_disable_all(struct rte_eth_dev *dev);
>
> -void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
> +int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev);
>
> void ixgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
>
> +int ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev);
> +
> int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev);
>
> uint32_t ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t
> orig_val);
> +
> #endif /* _IXGBE_ETHDEV_H_ */
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> b/lib/librte_pmd_ixgbe/ixgbe_pf.c index 4103e97..a7b9333 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> @@ -91,7 +91,7 @@ ixgbe_mb_intr_setup(struct rte_eth_dev *dev)
> return 0;
> }
>
> -void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
> +int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
> {
> struct ixgbe_vf_info **vfinfo =
> IXGBE_DEV_PRIVATE_TO_P_VFDATA(eth_dev->data-
> >dev_private);
> @@ -101,39 +101,31 @@ void ixgbe_pf_host_init(struct rte_eth_dev
> *eth_dev)
> IXGBE_DEV_PRIVATE_TO_UTA(eth_dev->data->dev_private);
> struct ixgbe_hw *hw =
> IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> + int retval;
> uint16_t vf_num;
> - uint8_t nb_queue;
>
> PMD_INIT_FUNC_TRACE();
>
> - RTE_ETH_DEV_SRIOV(eth_dev).active = 0;
> - if (0 == (vf_num = dev_num_vf(eth_dev)))
> - return;
> + /* Fill sriov structure using default configuration. */
> + retval = ixgbe_pf_configure_mq_sriov(eth_dev);
> + if (retval != 0) {
> + if (retval < 0)
> + PMD_INIT_LOG(ERR, " Setting up SRIOV with default
> device "
> + "configuration should not fail. This is a
> BUG.");
> + return 0;
> + }
>
> + vf_num = dev_num_vf(eth_dev);
> *vfinfo = rte_zmalloc("vf_info", sizeof(struct ixgbe_vf_info) *
> vf_num, 0);
> - if (*vfinfo == NULL)
> - rte_panic("Cannot allocate memory for private VF data\n");
> + if (*vfinfo == NULL) {
> + PMD_INIT_LOG(ERR, "Cannot allocate memory for private VF
> data.");
> + return (-ENOMEM);
> + }
>
> memset(mirror_info,0,sizeof(struct ixgbe_mirror_info));
> memset(uta_info,0,sizeof(struct ixgbe_uta_info));
> hw->mac.mc_filter_type = 0;
>
> - if (vf_num >= ETH_32_POOLS) {
> - nb_queue = 2;
> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
> - } else if (vf_num >= ETH_16_POOLS) {
> - nb_queue = 4;
> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
> - } else {
> - nb_queue = 8;
> - RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
> - }
> -
> - RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
> - RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
> - RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
> - RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
> (uint16_t)(vf_num * nb_queue);
> -
> ixgbe_vf_perm_addr_gen(eth_dev, vf_num);
>
> /* init_mailbox_params */
> @@ -142,7 +134,169 @@ void ixgbe_pf_host_init(struct rte_eth_dev
> *eth_dev)
> /* set mb interrupt mask */
> ixgbe_mb_intr_setup(eth_dev);
>
> - return;
> + return 0;
> +}
> +
> +
> +/*
> + * Function that make SRIOV configuration, based on device
> +configuration,
> + * number of requested queues and number of VF created.
> + * Function returns:
> + * 1 - SRIOV is not enabled (no VF created)
> + * 0 - proper SRIOV configuration found.
> + * -EINVAL - no suitable SRIOV configuration found.
> + */
> +int
> +ixgbe_pf_configure_mq_sriov(struct rte_eth_dev *dev) {
If this function is called by ixgbe_pf_host_init, it runs during initialization,
when the dev_conf in data is meaningless. Is the following check still necessary
then? Maybe it's better to use different functions for the configure and init
phases.
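e.g. something like this, purely illustrative (the split and the init-side
function name are hypothetical):

	/* init path: no user dev_conf yet, only set up the default queue split */
	ixgbe_pf_sriov_init(eth_dev);

	/* configure path: validate the mq modes requested by the application */
	ixgbe_pf_configure_mq_sriov(dev);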
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> + struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
> + uint16_t vf_num;
> +
> + vf_num = dev_num_vf(dev);
> + if (vf_num == 0) {
> + memset(sriov, 0, sizeof(*sriov));
> + return 1;
> + }
> +
> + /* Check multi-queue mode. */
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> + (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS)
> ||
> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
> + /* SRIOV only works in VMDq enable mode */
> + PMD_INIT_LOG(ERR, " SRIOV active, "
> + "invlaid VMDQ rx mode (%u) or tx (%u)
> mode.",
> +			dev_conf->rxmode.mq_mode, dev_conf->txmode.mq_mode);
> + return (-EINVAL);
> + }
> +
> + switch (dev_conf->rxmode.mq_mode) {
> + case ETH_MQ_RX_VMDQ_DCB:
> + if (vf_num <= ETH_16_POOLS)
> + sriov->nb_rx_q_per_pool = 8;
> + else if (vf_num <= ETH_32_POOLS)
> + sriov->nb_rx_q_per_pool = 4;
> + else {
> + PMD_INIT_LOG(ERR,
> + "DCB (SRIOV active) - VF count (%d) must be
> less or equal 32.",
> + vf_num);
> + return (-EINVAL);
> + }
> +
> + if (dev->data->nb_rx_queues < sriov->nb_rx_q_per_pool) {
> + PMD_INIT_LOG(WARNING,
> + "DCB (SRIOV active) rx queues (%d) count is
> not equal %d.",
> + dev->data->nb_rx_queues,
> + sriov->nb_rx_q_per_pool);
> + }
> + break;
> + case ETH_MQ_RX_RSS:
> + PMD_INIT_LOG(INFO, "RSS (SRIOV active), "
> + "rx mq mode is changed from: mq_mode %u
> into VMDQ mq_mode %u.",
> +			dev_conf->rxmode.mq_mode, dev->data->dev_conf.rxmode.mq_mode);
> + dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_RSS;
> + /* falltrought */
> + case ETH_MQ_RX_VMDQ_RSS:
> + if (vf_num >= ETH_64_POOLS) {
> + /* FIXME: Is vf_num > 64 realy supported by
> hardware? */
> + PMD_INIT_LOG(ERR, "RSS (SRIOV active), "
> + "VFs num must be less or equal 64.");
> + return (-EINVAL);
> + } else if (vf_num >= ETH_32_POOLS) {
> +			if (dev->data->nb_rx_queues != 1 && dev->data->nb_rx_queues != 2) {
> + PMD_INIT_LOG(ERR, "RSS (SRIOV active, VF
> count >= 32),"
> + "invalid rx queues count %d.
> It must be 1 or 2.",
> + dev->data->nb_rx_queues);
> + return (-EINVAL);
> + }
> +
> +			sriov->nb_rx_q_per_pool = dev->data->nb_rx_queues;
> + } else {
> + /* FIXME: is VT(16) + RSS realy supported? */
> + if (dev->data->nb_rx_queues != 4) {
> + PMD_INIT_LOG(ERR, "RSS (SRIOV active, VFs
> count < 32), "
> + "invalid rx queues count %d.
> It must be 4.",
> + dev->data->nb_rx_queues);
> + return (-EINVAL);
> + }
> +
> + sriov->nb_rx_q_per_pool = 4;
> + }
> + break;
> + default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
> + /* if nothing mq mode configure, use default scheme */
> + if (dev->data->dev_conf.rxmode.mq_mode !=
> ETH_MQ_RX_VMDQ_ONLY) {
> + PMD_INIT_LOG(INFO, "Rx mq mode changed
> from %u into VMDQ %u.",
> +				dev->data->dev_conf.rxmode.mq_mode, ETH_MQ_RX_VMDQ_ONLY);
> +
> + dev->data->dev_conf.rxmode.mq_mode =
> ETH_MQ_RX_VMDQ_ONLY;
> + }
> +
> + /* queue 0 of each pool is used. */
> + sriov->nb_rx_q_per_pool = 1;
> + break;
> + }
> +
> + switch (dev_conf->txmode.mq_mode) {
> + case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
> + if (vf_num <= ETH_16_POOLS)
> + sriov->nb_tx_q_per_pool = 8;
> + else if (vf_num <= ETH_32_POOLS)
> + sriov->nb_tx_q_per_pool = 4;
> + else if (vf_num <= ETH_64_POOLS)
> + sriov->nb_tx_q_per_pool = 1;
> + else {
> + PMD_INIT_LOG(ERR, "DCB (SRIOV active), "
> + "VF count (%d) must be less or equal
> 64.",
> + vf_num);
> + return (-EINVAL);
> + }
> + break;
> + default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
> + /* if nothing mq mode configure, use default scheme */
> + if (dev->data->dev_conf.txmode.mq_mode !=
> ETH_MQ_TX_VMDQ_ONLY) {
> + PMD_INIT_LOG(INFO, "Tx mq mode is changed
> from %u into VMDQ %u.",
> +				dev->data->dev_conf.txmode.mq_mode, ETH_MQ_TX_VMDQ_ONLY);
> +
> + dev->data->dev_conf.txmode.mq_mode =
> ETH_MQ_TX_VMDQ_ONLY;
> + }
> +
> + /* queue 0 of each pool is used. */
> + sriov->nb_tx_q_per_pool = 1;
> + break;
> + }
> +
> + sriov->def_vmdq_idx = vf_num;
> +
> + /*
> + * Pools starts at 2xN, 4xN or 8xN
> + */
> + if (vf_num >= ETH_32_POOLS) {
> + /* This must be vf_num <= ETH_64_POOLS */
> + sriov->active = ETH_64_POOLS;
> + sriov->def_pool_q_idx = vf_num * 2;
> + } else if (vf_num >= ETH_16_POOLS) {
> + sriov->active = ETH_32_POOLS;
> + sriov->def_pool_q_idx = vf_num * 4;
> + } else {
> + sriov->active = ETH_16_POOLS;
> + sriov->def_pool_q_idx = vf_num * 8;
> + }
> +
> + /* Check if available queus count is not less than allocated.*/
> + if (dev->data->nb_rx_queues > sriov->nb_rx_q_per_pool) {
> + PMD_INIT_LOG(ERR, "SRIOV active, rx queue count must
> less or equal %d.",
> + sriov->nb_rx_q_per_pool);
> + return (-EINVAL);
> + }
> +
> + if (dev->data->nb_rx_queues > sriov->nb_tx_q_per_pool) {
> + PMD_INIT_LOG(ERR, "SRIOV active, tx queue count must
> less or equal %d.",
> + sriov->nb_tx_q_per_pool);
> + return (-EINVAL);
> + }
> +
> + return 0;
> }
>
> int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
> --
> 1.9.1
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-14 0:51 ` Ouyang, Changchun
@ 2015-01-14 9:46 ` Wodkowski, PawelX
0 siblings, 0 replies; 41+ messages in thread
From: Wodkowski, PawelX @ 2015-01-14 9:46 UTC (permalink / raw)
To: Ouyang, Changchun, Vlad Zolotarov, Jastrzebski, MichalX K, dev
> > >
> > > - split nb_q_per_pool to nb_rx_q_per_pool and nb_tx_q_per_pool
> > >
> > > Rationale:
> > >
> > > rx and tx number of queue might be different if RX and TX are
> > >
> > > configured in different mode. This allow to inform VF about
> > >
> > > proper number of queues.
> >
> >
> > Nice move! Ouyang, this is a nice answer to my recent remarks about your
> > PATCH4 in "Enable VF RSS for Niantic" series.
>
> After I responded to your last comments, I saw this :-). I am sure we both agree it is
> the right way to resolve it in the VMDQ DCB case.
>
I am now splitting this patch according to your suggestions and I am a little
confused. In this (DCB in SRIOV) case the primary reason for splitting
nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool was this code:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index af9e261..be3afe4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -537,8 +537,8 @@
default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
+ RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
break;
}
@@ -553,17 +553,18 @@
default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool > 1)
+ RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool = 1;
break;
}
/* check valid queue number */
- if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
- (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+ if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool) ||
+ (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)) {
PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
- "queue number must less equal to %d\n",
- port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+ "rx/tx queue number must less equal to %d/%d\n",
+ port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
+ RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
return (-EINVAL);
}
} else {
--
This introduced an issue when RX and TX were configured in different modes. The
problem was that RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool was shared between RX and
TX and was being changed, so I did the above. But when testpmd was adjusted for
DCB in SRIOV there was another issue: testpmd pre-configures ports by default,
and since nb_rx_q_per_pool and nb_tx_q_per_pool were already reset to 1 there
was no way to use them for DCB in SRIOV. So I did another modification:
> + uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
> + uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
> +
> switch (dev_conf->rxmode.mq_mode) {
> - case ETH_MQ_RX_VMDQ_RSS:
> case ETH_MQ_RX_VMDQ_DCB:
> + break;
> + case ETH_MQ_RX_VMDQ_RSS:
> case ETH_MQ_RX_VMDQ_DCB_RSS:
> - /* DCB/RSS VMDQ in SRIOV mode, not implement yet */
> + /* RSS, DCB+RSS VMDQ in SRIOV mode, not implement yet */
> PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> " SRIOV active, "
> "unsupported VMDQ mq_mode rx %u\n",
> @@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
> /* if nothing mq mode configure, use default scheme */
> dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
> - if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> - RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> + if (nb_rx_q_per_pool > 1)
> + nb_rx_q_per_pool = 1;
> break;
> }
>
> switch (dev_conf->txmode.mq_mode) {
> - case ETH_MQ_TX_VMDQ_DCB:
> - /* DCB VMDQ in SRIOV mode, not implement yet */
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> - " SRIOV active, "
> - "unsupported VMDQ mq_mode tx %u\n",
> - port_id, dev_conf->txmode.mq_mode);
> - return (-EINVAL);
> + case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
> + break;
> default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
> /* if nothing mq mode configure, use default scheme */
> dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
> - if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> - RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> + if (nb_tx_q_per_pool > 1)
> + nb_tx_q_per_pool = 1;
> break;
> }
>
> /* check valid queue number */
> - if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
> - (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
> + if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
> PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
> - "queue number must less equal to %d\n",
> - port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
> + "rx/tx queue number must less equal to %d/%d\n",
> + port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
> + RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
> return (-EINVAL);
> }
At this point I think that splitting RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool might
not be needed. From my point of view (DCB), since nb_q_per_pool is left
untouched, I think I can stay with:
> + uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> + uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> +
What do you think? I noticed that you were discussing some issue about
nb_q_per_pool with respect to the RSS functionality. Can you speak to my doubts
regarding RSS?
Pawel
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-13 10:08 ` Vlad Zolotarov
@ 2015-01-14 0:51 ` Ouyang, Changchun
2015-01-14 9:46 ` Wodkowski, PawelX
0 siblings, 1 reply; 41+ messages in thread
From: Ouyang, Changchun @ 2015-01-14 0:51 UTC (permalink / raw)
To: Vlad Zolotarov, Jastrzebski, MichalX K, dev
> -----Original Message-----
> From: Vlad Zolotarov [mailto:vladz@cloudius-systems.com]
> Sent: Tuesday, January 13, 2015 6:09 PM
> To: Jastrzebski, MichalX K; dev@dpdk.org
> Cc: Ouyang, Changchun
> Subject: Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
>
> On 01/12/15 16:43, Michal Jastrzebski wrote:
> > Date: Mon, 12 Jan 2015 15:39:40 +0100
> > Message-Id:
> > <1421073581-6644-2-git-send-email-michalx.k.jastrzebski@intel.com>
> > X-Mailer: git-send-email 2.1.1
> > In-Reply-To:
> > <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
> > References:
> > <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
> >
> > From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
> >
> >
> > This patch add support for DCB in SRIOV mode. When no PFC
> >
> > is enabled this feature might be used as multiple queues
> >
> > (up to 8 or 4) for VF.
> >
> >
> >
> > It incorporate following modifications:
> >
> > - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> >
> > Rationale:
> >
> > in SRIOV mode PF use first free VF to RX/TX. If VF count
> >
> > is 16 or 32 all recources are assigned to VFs so PF can
> >
> > be used only for configuration.
> >
> > - split nb_q_per_pool to nb_rx_q_per_pool and nb_tx_q_per_pool
> >
> > Rationale:
> >
> > rx and tx number of queue might be different if RX and TX are
> >
> > configured in different mode. This allow to inform VF about
> >
> > proper number of queues.
>
>
> Nice move! Ouyang, this is a nice answer to my recent remarks about your
> PATCH4 in "Enable VF RSS for Niantic" series.
After I responded to your last comments, I saw this :-). I am sure we both agree it is the right way to resolve it in the VMDQ DCB case.
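For reference, the split amounts to replacing the single per-pool counter in
struct rte_eth_dev_sriov with an rx and a tx one (a sketch; the exact types and
comments may differ from the patch):

	struct rte_eth_dev_sriov {
		uint8_t active;            /* SRIOV is active with 16, 32 or 64 pools */
		uint8_t nb_rx_q_per_pool;  /* was: uint8_t nb_q_per_pool */
		uint8_t nb_tx_q_per_pool;
		uint16_t def_vmdq_idx;     /* default pool of the PF */
		uint16_t def_pool_q_idx;   /* default pool queue start index */
	};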
> Michal, could u, pls., respin this series after fixing the formatting and (maybe)
> using "git send-email" for sending? ;)
>
> thanks,
> vlad
>
> > [...]
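To make concrete how a VF learns its per-pool queue counts after this change, below is a rough sketch of the VF-side counterpart. It is not part of the patch: the helper name is made up, and the posted-write/read usage and error handling are assumptions based on the shared ixgbe mailbox code.

	/*
	 * Hypothetical VF-side helper (not in this patch): negotiate
	 * mailbox API 1.1, then ask the PF how many queues this pool
	 * owns. The constants come from the shared ixgbe mailbox code;
	 * this is a sketch, not the real ixgbevf implementation.
	 */
	static int
	vf_query_queue_counts(struct ixgbe_hw *hw,
			      uint32_t *nb_rx_q, uint32_t *nb_tx_q)
	{
		uint32_t msg[5];
		int err;

		/* step 1: negotiate the mailbox API version */
		msg[0] = IXGBE_VF_API_NEGOTIATE;
		msg[1] = ixgbe_mbox_api_11;
		err = hw->mbx.ops.write_posted(hw, msg, 2, 0);
		if (!err)
			err = hw->mbx.ops.read_posted(hw, msg, 2, 0);
		if (err || !(msg[0] & IXGBE_VT_MSGTYPE_ACK))
			return -1;

		/* step 2: fetch the counts filled in by ixgbe_get_vf_queues()
		 * on the PF side (num_tcs in VMDQ+DCB mode, otherwise 1) */
		msg[0] = IXGBE_VF_GET_QUEUES;
		err = hw->mbx.ops.write_posted(hw, msg, 1, 0);
		if (!err)
			err = hw->mbx.ops.read_posted(hw, msg, 5, 0);
		if (err || !(msg[0] & IXGBE_VT_MSGTYPE_ACK))
			return -1;

		*nb_tx_q = msg[IXGBE_VF_TX_QUEUES];
		*nb_rx_q = msg[IXGBE_VF_RX_QUEUES];
		return 0;
	}

This is exactly why the nb_rx_q_per_pool/nb_tx_q_per_pool split matters: without it the PF could not report different RX and TX counts to its VFs.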
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-12 14:43 [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
2015-01-12 15:46 ` Jastrzebski, MichalX K
@ 2015-01-13 10:08 ` Vlad Zolotarov
2015-01-14 0:51 ` Ouyang, Changchun
1 sibling, 1 reply; 41+ messages in thread
From: Vlad Zolotarov @ 2015-01-13 10:08 UTC (permalink / raw)
To: Michal Jastrzebski, dev
On 01/12/15 16:43, Michal Jastrzebski wrote:
> Date: Mon, 12 Jan 2015 15:39:40 +0100
> Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski@intel.com>
> X-Mailer: git-send-email 2.1.1
> In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
> References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
>
> From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
>
> This patch adds support for DCB in SRIOV mode. When no PFC
> is enabled, this feature can be used as multiple queues
> (up to 8 or 4) per VF.
>
> It incorporates the following modifications:
>
> - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> Rationale:
> in SRIOV mode the PF uses the first free VF (pool) for its own RX/TX. If the
> VF count is 16 or 32, all resources are assigned to VFs, so the PF can
> be used only for configuration.
>
> - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> Rationale:
> the rx and tx queue counts may differ when RX and TX are
> configured in different modes. This allows the PF to inform each VF of
> its proper number of queues.
Nice move! Ouyang, this is a nice answer to my recent remarks about your
PATCH4 in "Enable VF RSS for Niantic" series.
Michal, could u, pls., respin this series after fixing the formatting
and (maybe) using "git send-email" for sending? ;)
thanks,
vlad
>
> - Extend the mailbox API for DCB mode.
>
>
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
> ---
>
> lib/librte_ether/rte_ethdev.c | 84 +++++++++++++++++++++---------
>
> lib/librte_ether/rte_ethdev.h | 5 +-
>
> lib/librte_pmd_e1000/igb_pf.c | 3 +-
>
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 10 ++--
>
> lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 1 +
>
> lib/librte_pmd_ixgbe/ixgbe_pf.c | 98 ++++++++++++++++++++++++++++++-----
>
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 ++-
>
> 7 files changed, 159 insertions(+), 49 deletions(-)
>
>
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>
> index 95f2ceb..4c1a494 100644
>
> --- a/lib/librte_ether/rte_ethdev.c
>
> +++ b/lib/librte_ether/rte_ethdev.c
>
> @@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>
> dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
>
> sizeof(dev->data->rx_queues[0]) * nb_queues,
>
> RTE_CACHE_LINE_SIZE);
>
> - if (dev->data->rx_queues == NULL) {
>
> + if (dev->data->rx_queues == NULL && nb_queues > 0) {
>
> dev->data->nb_rx_queues = 0;
>
> return -(ENOMEM);
>
> }
>
> @@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>
> dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
>
> sizeof(dev->data->tx_queues[0]) * nb_queues,
>
> RTE_CACHE_LINE_SIZE);
>
> - if (dev->data->tx_queues == NULL) {
>
> + if (dev->data->tx_queues == NULL && nb_queues > 0) {
>
> dev->data->nb_tx_queues = 0;
>
> return -(ENOMEM);
>
> }
>
> @@ -507,6 +507,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> const struct rte_eth_conf *dev_conf)
>
> {
>
> struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>
> + struct rte_eth_dev_info dev_info;
>
>
>
> if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
>
> /* check multi-queue mode */
>
> @@ -524,11 +525,33 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> return (-EINVAL);
>
> }
>
>
>
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) &&
>
> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)) {
>
> + enum rte_eth_nb_pools rx_pools =
>
> + dev_conf->rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
>
> + enum rte_eth_nb_pools tx_pools =
>
> + dev_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
>
> +
>
> + if (rx_pools != tx_pools) {
>
> + /* Only equal number of pools is supported when
>
> + * DCB+VMDq in SRIOV */
>
> + PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>
> + " SRIOV active, DCB+VMDQ mode, "
>
> + "number of rx and tx pools is not eqaul\n",
>
> + port_id);
>
> + return (-EINVAL);
>
> + }
>
> + }
>
> +
>
> + uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
>
> + uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
>
> +
>
> switch (dev_conf->rxmode.mq_mode) {
>
> - case ETH_MQ_RX_VMDQ_RSS:
>
> case ETH_MQ_RX_VMDQ_DCB:
>
> + break;
>
> + case ETH_MQ_RX_VMDQ_RSS:
>
> case ETH_MQ_RX_VMDQ_DCB_RSS:
>
> - /* DCB/RSS VMDQ in SRIOV mode, not implement yet */
>
> + /* RSS, DCB+RSS VMDQ in SRIOV mode, not implement yet */
>
> PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>
> " SRIOV active, "
>
> "unsupported VMDQ mq_mode rx %u\n",
>
> @@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
>
> /* if nothing mq mode configure, use default scheme */
>
> dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
>
> - if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
>
> - RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
>
> + if (nb_rx_q_per_pool > 1)
>
> + nb_rx_q_per_pool = 1;
>
> break;
>
> }
>
>
>
> switch (dev_conf->txmode.mq_mode) {
>
> - case ETH_MQ_TX_VMDQ_DCB:
>
> - /* DCB VMDQ in SRIOV mode, not implement yet */
>
> - PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>
> - " SRIOV active, "
>
> - "unsupported VMDQ mq_mode tx %u\n",
>
> - port_id, dev_conf->txmode.mq_mode);
>
> - return (-EINVAL);
>
> + case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
>
> + break;
>
> default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
>
> /* if nothing mq mode configure, use default scheme */
>
> dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
>
> - if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
>
> - RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
>
> + if (nb_tx_q_per_pool > 1)
>
> + nb_tx_q_per_pool = 1;
>
> break;
>
> }
>
>
>
> /* check valid queue number */
>
> - if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
>
> - (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
>
> + if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
>
> PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
>
> - "queue number must less equal to %d\n",
>
> - port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
>
> + "rx/tx queue number must less equal to %d/%d\n",
>
> + port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
>
> + RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
>
> return (-EINVAL);
>
> }
>
> } else {
>
> - /* For vmdb+dcb mode check our configuration before we go further */
>
> + /* For vmdq+dcb mode check our configuration before we go further */
>
> if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
>
> const struct rte_eth_vmdq_dcb_conf *conf;
>
>
>
> @@ -606,11 +624,20 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> }
>
> }
>
>
>
> + /* For DCB we need to obtain maximum number of queues dinamically,
>
> + * as this depends on max VF exported in PF */
>
> + if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
>
> + (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
>
> +
>
> + FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
>
> + (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
>
> + }
>
> +
>
> /* For DCB mode check our configuration before we go further */
>
> if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
>
> const struct rte_eth_dcb_rx_conf *conf;
>
>
>
> - if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
>
> + if (nb_rx_q != dev_info.max_rx_queues) {
>
> PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
>
> "!= %d\n",
>
> port_id, ETH_DCB_NUM_QUEUES);
>
> @@ -630,7 +657,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
>
> const struct rte_eth_dcb_tx_conf *conf;
>
>
>
> - if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
>
> + if (nb_tx_q != dev_info.max_tx_queues) {
>
> PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
>
> "!= %d\n",
>
> port_id, ETH_DCB_NUM_QUEUES);
>
> @@ -690,7 +717,10 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> }
>
> if (nb_rx_q == 0) {
>
> PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
>
> - return (-EINVAL);
>
> + /* In SRIOV there can be no free resource for PF. So permit use only
>
> + * for configuration. */
>
> + if (RTE_ETH_DEV_SRIOV(dev).active == 0)
>
> + return (-EINVAL);
>
> }
>
>
>
> if (nb_tx_q > dev_info.max_tx_queues) {
>
> @@ -698,9 +728,13 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> port_id, nb_tx_q, dev_info.max_tx_queues);
>
> return (-EINVAL);
>
> }
>
> +
>
> if (nb_tx_q == 0) {
>
> PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
>
> - return (-EINVAL);
>
> + /* In SRIOV there can be no free resource for PF. So permit use only
>
> + * for configuration. */
>
> + if (RTE_ETH_DEV_SRIOV(dev).active == 0)
>
> + return (-EINVAL);
>
> }
>
>
>
> /* Copy the dev_conf parameter into the dev structure */
>
> @@ -750,7 +784,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>
> ETHER_MAX_LEN;
>
> }
>
>
>
> - /* multipe queue mode checking */
>
> + /* multiple queue mode checking */
>
> diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
>
> if (diag != 0) {
>
> PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
>
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
>
> index ce0528f..04fda83 100644
>
> --- a/lib/librte_ether/rte_ethdev.h
>
> +++ b/lib/librte_ether/rte_ethdev.h
>
> @@ -299,7 +299,7 @@ enum rte_eth_rx_mq_mode {
>
> enum rte_eth_tx_mq_mode {
>
> ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
>
> ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
>
> - ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
>
> + ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
>
> ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
>
> };
>
>
>
> @@ -1569,7 +1569,8 @@ struct rte_eth_dev {
>
>
>
> struct rte_eth_dev_sriov {
>
> uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
>
> - uint8_t nb_q_per_pool; /**< rx queue number per pool */
>
> + uint8_t nb_rx_q_per_pool; /**< rx queue number per pool */
>
> + uint8_t nb_tx_q_per_pool; /**< tx queue number per pool */
>
> uint16_t def_vmdq_idx; /**< Default pool num used for PF */
>
> uint16_t def_pool_q_idx; /**< Default pool queue start reg index */
>
> };
>
> diff --git a/lib/librte_pmd_e1000/igb_pf.c b/lib/librte_pmd_e1000/igb_pf.c
>
> index bc3816a..9d2f858 100644
>
> --- a/lib/librte_pmd_e1000/igb_pf.c
>
> +++ b/lib/librte_pmd_e1000/igb_pf.c
>
> @@ -115,7 +115,8 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
>
> rte_panic("Cannot allocate memory for private VF data\n");
>
>
>
> RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
>
> - RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
>
> + RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
>
> + RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
>
> RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
>
> RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
>
>
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>
> index 3fc3738..347f03c 100644
>
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
>
> @@ -3555,14 +3555,14 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
>
> struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> struct ixgbe_vf_info *vfinfo =
>
> *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
>
> - uint8_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
>
> + uint8_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
>
> uint32_t queue_stride =
>
> IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
>
> uint32_t queue_idx = vf * queue_stride, idx = 0, vf_idx;
>
> - uint32_t queue_end = queue_idx + nb_q_per_pool - 1;
>
> + uint32_t tx_queue_end = queue_idx + nb_tx_q_per_pool - 1;
>
> uint16_t total_rate = 0;
>
>
>
> - if (queue_end >= hw->mac.max_tx_queues)
>
> + if (tx_queue_end >= hw->mac.max_tx_queues)
>
> return -EINVAL;
>
>
>
> if (vfinfo != NULL) {
>
> @@ -3577,7 +3577,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
>
> return -EINVAL;
>
>
>
> /* Store tx_rate for this vf. */
>
> - for (idx = 0; idx < nb_q_per_pool; idx++) {
>
> + for (idx = 0; idx < nb_tx_q_per_pool; idx++) {
>
> if (((uint64_t)0x1 << idx) & q_msk) {
>
> if (vfinfo[vf].tx_rate[idx] != tx_rate)
>
> vfinfo[vf].tx_rate[idx] = tx_rate;
>
> @@ -3595,7 +3595,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
>
> }
>
>
>
> /* Set RTTBCNRC of each queue/pool for vf X */
>
> - for (; queue_idx <= queue_end; queue_idx++) {
>
> + for (; queue_idx <= tx_queue_end; queue_idx++) {
>
> if (0x1 & q_msk)
>
> ixgbe_set_queue_rate_limit(dev, queue_idx, tx_rate);
>
> q_msk = q_msk >> 1;
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>
> index ca99170..ebf16e9 100644
>
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
>
> @@ -159,6 +159,7 @@ struct ixgbe_vf_info {
>
> uint16_t tx_rate[IXGBE_MAX_QUEUE_NUM_PER_VF];
>
> uint16_t vlan_count;
>
> uint8_t spoofchk_enabled;
>
> + unsigned int vf_api;
>
> };
>
>
>
> /*
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
>
> index 51da1fd..4d30bcf 100644
>
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
>
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
>
> @@ -127,7 +127,8 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
>
> RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
>
> }
>
>
>
> - RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
>
> + RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
>
> + RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
>
> RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
>
> RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
>
>
>
> @@ -189,7 +190,7 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
>
> hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
>
>
>
> /*
>
> - * SW msut set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
>
> + * SW must set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
>
> */
>
> gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
>
> gcr_ext &= ~IXGBE_GCR_EXT_VT_MODE_MASK;
>
> @@ -214,19 +215,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
>
> }
>
>
>
> IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
>
> - IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
>
> + IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
>
>
>
> - /*
>
> + /*
>
> * enable vlan filtering and allow all vlan tags through
>
> */
>
> - vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
>
> - vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
>
> - IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
>
> + vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
>
> + vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
>
> + IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
>
>
>
> - /* VFTA - enable all vlan filters */
>
> - for (i = 0; i < IXGBE_MAX_VFTA; i++) {
>
> - IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
>
> - }
>
> + /* VFTA - enable all vlan filters */
>
> + for (i = 0; i < IXGBE_MAX_VFTA; i++) {
>
> + IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
>
> + }
>
>
>
> /* Enable MAC Anti-Spoofing */
>
> hw->mac.ops.set_mac_anti_spoofing(hw, FALSE, vf_num);
>
> @@ -369,6 +370,73 @@ ixgbe_vf_reset(struct rte_eth_dev *dev, uint16_t vf, uint32_t *msgbuf)
>
> }
>
>
>
> static int
>
> +ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
>
> +{
>
> + struct ixgbe_vf_info *vfinfo =
>
> + *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
>
> + int api = msgbuf[1];
>
> +
>
> + switch (api) {
>
> + case ixgbe_mbox_api_10:
>
> + case ixgbe_mbox_api_11:
>
> + vfinfo[vf].vf_api = api;
>
> + return 0;
>
> + default:
>
> + break;
>
> + }
>
> +
>
> + RTE_LOG(DEBUG, PMD, "VF %d requested invalid api version %u\n", vf, api);
>
> + return -1;
>
> +}
>
> +
>
> +static int
>
> +ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
>
> +{
>
> + struct ixgbe_vf_info *vfinfo =
>
> + *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
>
> + struct ixgbe_dcb_config *dcb_cfg =
>
> + IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
>
> +
>
> + uint8_t num_tcs = dcb_cfg->num_tcs.pg_tcs;
>
> +
>
> + /* verify the PF is supporting the correct APIs */
>
> + switch (vfinfo[vf].vf_api) {
>
> + case ixgbe_mbox_api_10:
>
> + case ixgbe_mbox_api_11:
>
> + break;
>
> + default:
>
> + return -1;
>
> + }
>
> +
>
> + if (RTE_ETH_DEV_SRIOV(dev).active) {
>
> + if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
>
> + msgbuf[IXGBE_VF_TX_QUEUES] = num_tcs;
>
> + else
>
> + msgbuf[IXGBE_VF_TX_QUEUES] = 1;
>
> +
>
> + if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
>
> + msgbuf[IXGBE_VF_RX_QUEUES] = num_tcs;
>
> + else
>
> + msgbuf[IXGBE_VF_RX_QUEUES] = 1;
>
> + } else {
>
> + /* only allow 1 Tx queue for bandwidth limiting */
>
> + msgbuf[IXGBE_VF_TX_QUEUES] = 1;
>
> + msgbuf[IXGBE_VF_RX_QUEUES] = 1;
>
> + }
>
> +
>
> + /* notify VF of need for VLAN tag stripping, and correct queue */
>
> + if (num_tcs)
>
> + msgbuf[IXGBE_VF_TRANS_VLAN] = num_tcs;
>
> + else
>
> + msgbuf[IXGBE_VF_TRANS_VLAN] = 0;
>
> +
>
> + /* notify VF of default queue */
>
> + msgbuf[IXGBE_VF_DEF_QUEUE] = 0;
>
> +
>
> + return 0;
>
> +}
>
> +
>
> +static int
>
> ixgbe_vf_set_mac_addr(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
>
> {
>
> struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> @@ -512,6 +580,12 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
>
> case IXGBE_VF_SET_VLAN:
>
> retval = ixgbe_vf_set_vlan(dev, vf, msgbuf);
>
> break;
>
> + case IXGBE_VF_API_NEGOTIATE:
>
> + retval = ixgbe_negotiate_vf_api(dev, vf, msgbuf);
>
> + break;
>
> + case IXGBE_VF_GET_QUEUES:
>
> + retval = ixgbe_get_vf_queues(dev, vf, msgbuf);
>
> + break;
>
> default:
>
> PMD_DRV_LOG(DEBUG, "Unhandled Msg %8.8x", (unsigned)msgbuf[0]);
>
> retval = IXGBE_ERR_MBX;
>
> @@ -526,7 +600,7 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
>
>
>
> msgbuf[0] |= IXGBE_VT_MSGTYPE_CTS;
>
>
>
> - ixgbe_write_mbx(hw, msgbuf, 1, vf);
>
> + ixgbe_write_mbx(hw, msgbuf, mbx_size, vf);
>
>
>
> return retval;
>
> }
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>
> index e10d6a2..49b44fe 100644
>
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>
> @@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
>
>
>
> /* check support mq_mode for DCB */
>
> if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
>
> - (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
>
> - return;
>
> -
>
> - if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
>
> + (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
>
> + (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
>
> + (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
>
> return;
>
>
>
> /** Configure DCB hardware **/
>
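As a usage illustration of the relaxed nb_rx_q/nb_tx_q check above, here is a minimal sketch of a PF that owns no queues. The function name and the 32-pool setup are illustrative, not taken from the patch or from testpmd:

	#include <string.h>
	#include <rte_ethdev.h>

	/* Sketch: all 32 pools belong to VFs, so the PF is configured
	 * with zero rx/tx queues and used only for control-path work. */
	static int
	configure_pf_for_sriov_dcb(uint8_t port_id)
	{
		struct rte_eth_conf conf;

		memset(&conf, 0, sizeof(conf));
		conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
		conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
		/* rte_eth_dev_check_mq_mode() rejects unequal rx/tx pool counts */
		conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools = ETH_32_POOLS;
		conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools = ETH_32_POOLS;

		/* nb_rx_q == nb_tx_q == 0 is now accepted while SRIOV is active */
		return rte_eth_dev_configure(port_id, 0, 0, &conf);
	}

The PF then still performs its usual control work (VLAN filters, anti-spoofing, per-VF rate limits) through the paths shown in ixgbe_pf_host_configure() and the mailbox handlers.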
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-12 15:46 ` Jastrzebski, MichalX K
@ 2015-01-13 10:02 ` Vlad Zolotarov
0 siblings, 0 replies; 41+ messages in thread
From: Vlad Zolotarov @ 2015-01-13 10:02 UTC (permalink / raw)
To: Jastrzebski, MichalX K, dev
On 01/12/15 17:46, Jastrzebski, MichalX K wrote:
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Michal Jastrzebski
>> Sent: Monday, January 12, 2015 3:43 PM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>>
>> [...]
> Self nacked - because of wrong message format.
Yeah, there is something really wrong with this email formatting.... ;)
Note that since u (i guess) haven't used 'git send-email' for this
series - it doesn't look like a series (at least in my thunderbird).
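For reference, the usual way to post such a series so that the threading and headers survive (assuming send-email's SMTP settings are already configured in .gitconfig):

	git format-patch -2 --cover-letter -o outgoing/
	git send-email --to=dev@dpdk.org --thread outgoing/*.patch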
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
2015-01-12 14:43 [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
@ 2015-01-12 15:46 ` Jastrzebski, MichalX K
2015-01-13 10:02 ` Vlad Zolotarov
2015-01-13 10:08 ` Vlad Zolotarov
1 sibling, 1 reply; 41+ messages in thread
From: Jastrzebski, MichalX K @ 2015-01-12 15:46 UTC (permalink / raw)
To: Jastrzebski, MichalX K, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Michal Jastrzebski
> Sent: Monday, January 12, 2015 3:43 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
> Date: Mon, 12 Jan 2015 15:39:40 +0100
> Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski@intel.com>
> X-Mailer: git-send-email 2.1.1
> In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
> References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
>
> From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
> This patch adds support for DCB in SRIOV mode. When PFC is not enabled,
> this feature can be used to provide multiple queues (up to 8 or 4) per VF.
>
> It incorporates the following modifications:
> - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
>   Rationale:
>   in SRIOV mode the PF uses the first free VF pool for RX/TX. If the VF
>   count is 16 or 32, all resources are assigned to VFs, so the PF can be
>   used only for configuration.
> - split nb_q_per_pool to nb_rx_q_per_pool and nb_tx_q_per_pool
>   Rationale:
>   the number of RX and TX queues may differ if RX and TX are configured
>   in different modes. This allows the PF to inform the VF about the
>   proper number of queues.
> - extend mailbox API for DCB mode
>
> Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
>
> ---
> lib/librte_ether/rte_ethdev.c       |   84 +++++++++++++++++++++---------
> lib/librte_ether/rte_ethdev.h       |    5 +-
> lib/librte_pmd_e1000/igb_pf.c       |    3 +-
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c |   10 ++--
> lib/librte_pmd_ixgbe/ixgbe_ethdev.h |    1 +
> lib/librte_pmd_ixgbe/ixgbe_pf.c     |   98 ++++++++++++++++++++++++++++++-----
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c   |    7 ++-
> 7 files changed, 159 insertions(+), 49 deletions(-)
>
> [...]
Self-nacked because of the wrong message format.
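To make the intended usage concrete, here is a minimal sketch (not part of the
patch) of a PF configuration that exercises the behaviour described in the
commit message above. The helper name and pool count are assumptions, and
error handling is trimmed; it only illustrates the two things the reworked
checks allow: equal RX/TX pool counts for DCB+VMDq, and zero PF queues while
SRIOV is active.

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative sketch only: configure a PF port for DCB over VMDq while
 * SRIOV is active. The port id and pool count are assumptions. */
static int
configure_pf_for_vmdq_dcb(uint8_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
	conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;

	/* The rte_eth_dev_check_mq_mode() rework requires matching
	 * RX and TX pool counts in this mode. */
	conf.rx_adv_conf.vmdq_dcb_conf.nb_queue_pools = ETH_16_POOLS;
	conf.tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools = ETH_16_POOLS;

	/* With 16 (or 32) VFs every queue belongs to a VF pool, so the
	 * PF is configured with zero RX/TX queues and used for control
	 * only, which this patch now permits. */
	return rte_eth_dev_configure(port_id, 0, 0, &conf);
}

With 16 pools each VF then carries one queue per traffic class (8 TCs for 16
pools, 4 TCs for 32 pools), which is the layout the mailbox additions in the
patch report to each VF.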
^ permalink raw reply [flat|nested] 41+ messages in thread
* [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
@ 2015-01-12 14:43 Michal Jastrzebski
2015-01-12 15:46 ` Jastrzebski, MichalX K
2015-01-13 10:08 ` Vlad Zolotarov
0 siblings, 2 replies; 41+ messages in thread
From: Michal Jastrzebski @ 2015-01-12 14:43 UTC (permalink / raw)
To: dev
Date: Mon, 12 Jan 2015 15:39:40 +0100
Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski@intel.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
From: Pawel Wodkowski <pawelx.wodkowski@intel.com>
This patch adds support for DCB in SRIOV mode. When PFC is not enabled,
this feature can be used to provide multiple queues
(up to 8 or 4) per VF.
It incorporates the following modifications:
- Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
Rationale:
in SRIOV mode the PF uses the first free VF pool for RX/TX. If the VF
count is 16 or 32, all resources are assigned to VFs, so the PF can
be used only for configuration.
- split nb_q_per_pool to nb_rx_q_per_pool and nb_tx_q_per_pool
Rationale:
the number of RX and TX queues may differ if RX and TX are
configured in different modes. This allows the PF to inform the VF about
the proper number of queues.
- extend mailbox API for DCB mode
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
---
lib/librte_ether/rte_ethdev.c | 84 +++++++++++++++++++++---------
lib/librte_ether/rte_ethdev.h | 5 +-
lib/librte_pmd_e1000/igb_pf.c | 3 +-
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 10 ++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 1 +
lib/librte_pmd_ixgbe/ixgbe_pf.c | 98 ++++++++++++++++++++++++++++++-----
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 7 ++-
7 files changed, 159 insertions(+), 49 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 95f2ceb..4c1a494 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
sizeof(dev->data->rx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
+ if (dev->data->rx_queues == NULL && nb_queues > 0) {
dev->data->nb_rx_queues = 0;
return -(ENOMEM);
}
@@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
sizeof(dev->data->tx_queues[0]) * nb_queues,
RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
+ if (dev->data->tx_queues == NULL && nb_queues > 0) {
dev->data->nb_tx_queues = 0;
return -(ENOMEM);
}
@@ -507,6 +507,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
{
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct rte_eth_dev_info dev_info;
if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
/* check multi-queue mode */
@@ -524,11 +525,33 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return (-EINVAL);
}
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) &&
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)) {
+ enum rte_eth_nb_pools rx_pools =
+ dev_conf->rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+ enum rte_eth_nb_pools tx_pools =
+ dev_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+
+ if (rx_pools != tx_pools) {
+ /* Only an equal number of pools is supported when
+ * DCB+VMDq is used in SRIOV */
+ PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
+ " SRIOV active, DCB+VMDQ mode, "
+ "number of rx and tx pools is not equal\n",
+ port_id);
+ return (-EINVAL);
+ }
+ }
+
+ uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
+ uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
+
switch (dev_conf->rxmode.mq_mode) {
- case ETH_MQ_RX_VMDQ_RSS:
case ETH_MQ_RX_VMDQ_DCB:
+ break;
+ case ETH_MQ_RX_VMDQ_RSS:
case ETH_MQ_RX_VMDQ_DCB_RSS:
- /* DCB/RSS VMDQ in SRIOV mode, not implement yet */
+ /* RSS, DCB+RSS VMDQ in SRIOV mode, not implemented yet */
PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
" SRIOV active, "
"unsupported VMDQ mq_mode rx %u\n",
@@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (nb_rx_q_per_pool > 1)
+ nb_rx_q_per_pool = 1;
break;
}
switch (dev_conf->txmode.mq_mode) {
- case ETH_MQ_TX_VMDQ_DCB:
- /* DCB VMDQ in SRIOV mode, not implement yet */
- PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
- " SRIOV active, "
- "unsupported VMDQ mq_mode tx %u\n",
- port_id, dev_conf->txmode.mq_mode);
- return (-EINVAL);
+ case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode */
+ break;
default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
/* if nothing mq mode configure, use default scheme */
dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
- if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
- RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+ if (nb_tx_q_per_pool > 1)
+ nb_tx_q_per_pool = 1;
break;
}
/* check valid queue number */
- if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
- (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+ if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
- "queue number must less equal to %d\n",
- port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+ "rx/tx queue number must less equal to %d/%d\n",
+ port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
+ RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
return (-EINVAL);
}
} else {
- /* For vmdb+dcb mode check our configuration before we go further */
+ /* For vmdq+dcb mode check our configuration before we go further */
if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
const struct rte_eth_vmdq_dcb_conf *conf;
@@ -606,11 +624,20 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
}
+ /* For DCB we need to obtain the maximum number of queues dynamically,
+ * as it depends on the max number of VFs exported by the PF */
+ if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
+ (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+ (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
+ }
+
/* For DCB mode check our configuration before we go further */
if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
const struct rte_eth_dcb_rx_conf *conf;
- if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
+ if (nb_rx_q != dev_info.max_rx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
"!= %d\n",
port_id, ETH_DCB_NUM_QUEUES);
@@ -630,7 +657,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
const struct rte_eth_dcb_tx_conf *conf;
- if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
+ if (nb_tx_q != dev_info.max_tx_queues) {
PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
"!= %d\n",
port_id, ETH_DCB_NUM_QUEUES);
@@ -690,7 +717,10 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
if (nb_rx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV mode there may be no free resources for the PF, so permit
+ * use only for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
if (nb_tx_q > dev_info.max_tx_queues) {
@@ -698,9 +728,13 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
port_id, nb_tx_q, dev_info.max_tx_queues);
return (-EINVAL);
}
+
if (nb_tx_q == 0) {
PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
- return (-EINVAL);
+ /* In SRIOV mode there may be no free resources for the PF, so permit
+ * use only for configuration. */
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0)
+ return (-EINVAL);
}
/* Copy the dev_conf parameter into the dev structure */
@@ -750,7 +784,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
ETHER_MAX_LEN;
}
- /* multipe queue mode checking */
+ /* multiple queue mode checking */
diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
if (diag != 0) {
PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index ce0528f..04fda83 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -299,7 +299,7 @@ enum rte_eth_rx_mq_mode {
enum rte_eth_tx_mq_mode {
ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
- ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
+ ETH_MQ_TX_VMDQ_DCB, /**< For TX side, both DCB and VT are on. */
ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
@@ -1569,7 +1569,8 @@ struct rte_eth_dev {
struct rte_eth_dev_sriov {
uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */
- uint8_t nb_q_per_pool; /**< rx queue number per pool */
+ uint8_t nb_rx_q_per_pool; /**< rx queue number per pool */
+ uint8_t nb_tx_q_per_pool; /**< tx queue number per pool */
uint16_t def_vmdq_idx; /**< Default pool num used for PF */
uint16_t def_pool_q_idx; /**< Default pool queue start reg index */
};
diff --git a/lib/librte_pmd_e1000/igb_pf.c b/lib/librte_pmd_e1000/igb_pf.c
index bc3816a..9d2f858 100644
--- a/lib/librte_pmd_e1000/igb_pf.c
+++ b/lib/librte_pmd_e1000/igb_pf.c
@@ -115,7 +115,8 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
rte_panic("Cannot allocate memory for private VF data\n");
RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
- RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 3fc3738..347f03c 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -3555,14 +3555,14 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_vf_info *vfinfo =
*(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
- uint8_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
+ uint8_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
uint32_t queue_stride =
IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
uint32_t queue_idx = vf * queue_stride, idx = 0, vf_idx;
- uint32_t queue_end = queue_idx + nb_q_per_pool - 1;
+ uint32_t tx_queue_end = queue_idx + nb_tx_q_per_pool - 1;
uint16_t total_rate = 0;
- if (queue_end >= hw->mac.max_tx_queues)
+ if (tx_queue_end >= hw->mac.max_tx_queues)
return -EINVAL;
if (vfinfo != NULL) {
@@ -3577,7 +3577,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
return -EINVAL;
/* Store tx_rate for this vf. */
- for (idx = 0; idx < nb_q_per_pool; idx++) {
+ for (idx = 0; idx < nb_tx_q_per_pool; idx++) {
if (((uint64_t)0x1 << idx) & q_msk) {
if (vfinfo[vf].tx_rate[idx] != tx_rate)
vfinfo[vf].tx_rate[idx] = tx_rate;
@@ -3595,7 +3595,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
}
/* Set RTTBCNRC of each queue/pool for vf X */
- for (; queue_idx <= queue_end; queue_idx++) {
+ for (; queue_idx <= tx_queue_end; queue_idx++) {
if (0x1 & q_msk)
ixgbe_set_queue_rate_limit(dev, queue_idx, tx_rate);
q_msk = q_msk >> 1;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
index ca99170..ebf16e9 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
@@ -159,6 +159,7 @@ struct ixgbe_vf_info {
uint16_t tx_rate[IXGBE_MAX_QUEUE_NUM_PER_VF];
uint16_t vlan_count;
uint8_t spoofchk_enabled;
+ unsigned int vf_api;
};
/*
diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
index 51da1fd..4d30bcf 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
@@ -127,7 +127,8 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
}
- RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
@@ -189,7 +190,7 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
/*
- * SW msut set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
+ * SW must set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
*/
gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
gcr_ext &= ~IXGBE_GCR_EXT_VT_MODE_MASK;
@@ -214,19 +215,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
}
IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
- IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
+ IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
- /*
+ /*
* enable vlan filtering and allow all vlan tags through
*/
- vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
- vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
- IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
+ vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
+ vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
+ IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
- /* VFTA - enable all vlan filters */
- for (i = 0; i < IXGBE_MAX_VFTA; i++) {
- IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
- }
+ /* VFTA - enable all vlan filters */
+ for (i = 0; i < IXGBE_MAX_VFTA; i++) {
+ IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
+ }
/* Enable MAC Anti-Spoofing */
hw->mac.ops.set_mac_anti_spoofing(hw, FALSE, vf_num);
@@ -369,6 +370,73 @@ ixgbe_vf_reset(struct rte_eth_dev *dev, uint16_t vf, uint32_t *msgbuf)
}
static int
+ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+{
+ struct ixgbe_vf_info *vfinfo =
+ *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
+ int api = msgbuf[1];
+
+ switch (api) {
+ case ixgbe_mbox_api_10:
+ case ixgbe_mbox_api_11:
+ vfinfo[vf].vf_api = api;
+ return 0;
+ default:
+ break;
+ }
+
+ RTE_LOG(DEBUG, PMD, "VF %d requested invalid api version %u\n", vf, api);
+ return -1;
+}
+
+static int
+ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+{
+ struct ixgbe_vf_info *vfinfo =
+ *(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
+ struct ixgbe_dcb_config *dcb_cfg =
+ IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
+
+ uint8_t num_tcs = dcb_cfg->num_tcs.pg_tcs;
+
+ /* verify the PF is supporting the correct APIs */
+ switch (vfinfo[vf].vf_api) {
+ case ixgbe_mbox_api_10:
+ case ixgbe_mbox_api_11:
+ break;
+ default:
+ return -1;
+ }
+
+ if (RTE_ETH_DEV_SRIOV(dev).active) {
+ if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
+ msgbuf[IXGBE_VF_RX_QUEUES] = num_tcs;
+ else
+ msgbuf[IXGBE_VF_RX_QUEUES] = 1;
+
+ if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
+ msgbuf[IXGBE_VF_TX_QUEUES] = num_tcs;
+ else
+ msgbuf[IXGBE_VF_TX_QUEUES] = 1;
+ } else {
+ /* only allow 1 Tx queue for bandwidth limiting */
+ msgbuf[IXGBE_VF_TX_QUEUES] = 1;
+ msgbuf[IXGBE_VF_RX_QUEUES] = 1;
+ }
+
+ /* notify VF of need for VLAN tag stripping, and correct queue */
+ if (num_tcs)
+ msgbuf[IXGBE_VF_TRANS_VLAN] = num_tcs;
+ else
+ msgbuf[IXGBE_VF_TRANS_VLAN] = 0;
+
+ /* notify VF of default queue */
+ msgbuf[IXGBE_VF_DEF_QUEUE] = 0;
+
+ return 0;
+}
+
+static int
ixgbe_vf_set_mac_addr(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -512,6 +580,12 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
case IXGBE_VF_SET_VLAN:
retval = ixgbe_vf_set_vlan(dev, vf, msgbuf);
break;
+ case IXGBE_VF_API_NEGOTIATE:
+ retval = ixgbe_negotiate_vf_api(dev, vf, msgbuf);
+ break;
+ case IXGBE_VF_GET_QUEUES:
+ retval = ixgbe_get_vf_queues(dev, vf, msgbuf);
+ break;
default:
PMD_DRV_LOG(DEBUG, "Unhandled Msg %8.8x", (unsigned)msgbuf[0]);
retval = IXGBE_ERR_MBX;
@@ -526,7 +600,7 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
msgbuf[0] |= IXGBE_VT_MSGTYPE_CTS;
- ixgbe_write_mbx(hw, msgbuf, 1, vf);
+ ixgbe_write_mbx(hw, msgbuf, mbx_size, vf);
return retval;
}
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e10d6a2..49b44fe 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
/* check support mq_mode for DCB */
if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
- (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
- return;
-
- if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
+ (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
+ (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
return;
/** Configure DCB hardware **/
--
1.7.9.5
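For readers unfamiliar with the mailbox flow the new IXGBE_VF_API_NEGOTIATE
and IXGBE_VF_GET_QUEUES handlers serve, the following rough sketch shows the
VF side of the exchange. It is an illustration, not driver code: it assumes a
VF-side struct ixgbe_hw, uses the shared base-code helpers ixgbe_write_mbx()
and ixgbe_read_mbx(), and simplifies the ACK/NACK handling the real VF driver
performs.

/* Sketch of the VF half of the exchange answered by
 * ixgbe_negotiate_vf_api() and ixgbe_get_vf_queues() above.
 * Constants come from ixgbe_mbx.h; mbx_id 0 addresses the PF. */
static int
vf_query_queue_layout(struct ixgbe_hw *hw)
{
	uint32_t msgbuf[5];
	int err;

	/* Step 1: negotiate mailbox API 1.1 so the PF accepts
	 * IXGBE_VF_GET_QUEUES. */
	msgbuf[0] = IXGBE_VF_API_NEGOTIATE;
	msgbuf[1] = ixgbe_mbox_api_11;
	err = ixgbe_write_mbx(hw, msgbuf, 2, 0);
	if (!err)
		err = ixgbe_read_mbx(hw, msgbuf, 2, 0);
	if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_NACK))
		return -1;

	/* Step 2: ask the PF how many queues this VF owns. The reply
	 * fills words 1..4, which is why ixgbe_rcv_msg_from_vf() now
	 * writes mbx_size words back instead of 1. */
	msgbuf[0] = IXGBE_VF_GET_QUEUES;
	err = ixgbe_write_mbx(hw, msgbuf, 1, 0);
	if (!err)
		err = ixgbe_read_mbx(hw, msgbuf, 5, 0);
	if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_NACK))
		return -1;

	/* msgbuf[IXGBE_VF_TX_QUEUES], msgbuf[IXGBE_VF_RX_QUEUES],
	 * msgbuf[IXGBE_VF_TRANS_VLAN] and msgbuf[IXGBE_VF_DEF_QUEUE]
	 * now hold the values set in ixgbe_get_vf_queues(). */
	return 0;
}

This also motivates the ixgbe_write_mbx(hw, msgbuf, mbx_size, vf) change at
the end of the patch: a one-word reply would truncate the queue information.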
^ permalink raw reply [flat|nested] 41+ messages in thread
end of thread
Thread overview: 41+ messages
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
2015-01-12 15:50 ` [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
2015-01-13 10:14 ` Vlad Zolotarov
2015-01-13 11:00 ` Wodkowski, PawelX
2015-01-14 1:00 ` Ouyang, Changchun
2015-01-12 15:50 ` [dpdk-dev] [PATCH 2/2] testpmd: fix dcb in vt mode Michal Jastrzebski
2015-01-13 10:15 ` Vlad Zolotarov
2015-01-13 11:08 ` Wodkowski, PawelX
2015-01-13 9:50 ` [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Wodkowski, PawelX
2015-01-13 10:11 ` Vlad Zolotarov
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 0/4] " Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 1/4] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 2/4] ethdev: prevent changing of nb_q_per_pool in rte_eth_dev_check_mq_mode() Pawel Wodkowski
2015-01-20 1:32 ` Ouyang, Changchun
2015-01-20 9:09 ` Wodkowski, PawelX
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 3/4] pmd: add support for DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
2015-01-20 1:56 ` Ouyang, Changchun
2015-01-20 6:52 ` Thomas Monjalon
2015-01-19 13:02 ` [dpdk-dev] [PATCH v2 4/4] testpmd: fix dcb in vt mode Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 1/7] ethdev: Allow zero rx/tx queues in SRIOV mode Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 2/7] pmd igb: fix VMDQ mode checking Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 3/7] pmd: igb/ixgbe split nb_q_per_pool to rx and tx nb_q_per_pool Pawel Wodkowski
2015-02-25 3:24 ` Ouyang, Changchun
2015-02-25 7:47 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 4/7] move rte_eth_dev_check_mq_mode() logic to driver Pawel Wodkowski
2015-02-25 6:14 ` Ouyang, Changchun
2015-02-25 9:57 ` Pawel Wodkowski
2015-06-09 4:06 ` Wu, Jingjing
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 5/7] pmd ixgbe: enable DCB in SRIOV Pawel Wodkowski
2015-02-25 3:36 ` Ouyang, Changchun
2015-02-25 11:29 ` Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 6/7] tespmd: fix DCB in SRIOV mode support Pawel Wodkowski
2015-02-19 15:54 ` [dpdk-dev] [PATCH v4 7/7] pmd ixgbe: fix vlan setting in in PF Pawel Wodkowski
2015-06-08 3:00 ` [dpdk-dev] [PATCH v4 0/7] Enable DCB in SRIOV mode for ixgbe driver Zhang, Helin
-- strict thread matches above, loose matches on Subject: below --
2015-01-12 14:43 [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
2015-01-12 15:46 ` Jastrzebski, MichalX K
2015-01-13 10:02 ` Vlad Zolotarov
2015-01-13 10:08 ` Vlad Zolotarov
2015-01-14 0:51 ` Ouyang, Changchun
2015-01-14 9:46 ` Wodkowski, PawelX