From: "Jastrzebski, MichalX K"
To: "Jastrzebski, MichalX K", "dev@dpdk.org"
Date: Mon, 12 Jan 2015 15:46:11 +0000
Message-ID: <60ABE07DBB3A454EB7FAD707B4BB1582138D2553@IRSMSX109.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
List-Id: patches and discussions about DPDK

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Michal Jastrzebski
> Sent: Monday, January 12, 2015 3:43 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
> Date: Mon, 12 Jan 2015 15:39:40 +0100
> Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski@intel.com>
> X-Mailer: git-send-email 2.1.1
> In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
> References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski@intel.com>
>
> From: Pawel Wodkowski
>
> This patch adds support for DCB in SRIOV mode. When no PFC
> is enabled this feature might be used as multiple queues
> (up to 8 or 4) for a VF.
>
> It incorporates the following modifications:
> - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
>   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX. If the VF count
>   is 16 or 32, all resources are assigned to VFs, so the PF can
>   be used only for configuration.
> - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
>   Rationale: the rx and tx queue numbers might differ if RX and TX are
>   configured in different modes. This allows informing the VF about the
>   proper number of queues.
> - Extend the mailbox API for DCB mode.
>
> Signed-off-by: Pawel Wodkowski
> ---
>  lib/librte_ether/rte_ethdev.c       |   84 +++++++++++++++++++++---------
>  lib/librte_ether/rte_ethdev.h       |    5 +-
>  lib/librte_pmd_e1000/igb_pf.c       |    3 +-
>  lib/librte_pmd_ixgbe/ixgbe_ethdev.c |   10 ++--
>  lib/librte_pmd_ixgbe/ixgbe_ethdev.h |    1 +
>  lib/librte_pmd_ixgbe/ixgbe_pf.c     |   98 +++++++++++++++++++++++++++++-----
>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c   |    7 ++-
>  7 files changed, 159 insertions(+), 49 deletions(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 95f2ceb..4c1a494 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>  	dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
>  			sizeof(dev->data->rx_queues[0]) * nb_queues,
>  			RTE_CACHE_LINE_SIZE);
> -	if (dev->data->rx_queues == NULL) {
> +	if (dev->data->rx_queues == NULL && nb_queues > 0) {
>  		dev->data->nb_rx_queues = 0;
>  		return -(ENOMEM);
>  	}
> @@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>  	dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
>  			sizeof(dev->data->tx_queues[0]) * nb_queues,
>  			RTE_CACHE_LINE_SIZE);
> -	if (dev->data->tx_queues == NULL) {
> +	if (dev->data->tx_queues == NULL && nb_queues > 0) {
>  		dev->data->nb_tx_queues = 0;
>  		return -(ENOMEM);
>  	}
> @@ -507,6 +507,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  		      const struct rte_eth_conf *dev_conf)
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	struct rte_eth_dev_info dev_info;
>
>  	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
>  		/* check multi-queue mode */
> @@ -524,11 +525,33 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  			return (-EINVAL);
>  		}
>
> +		if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) &&
> +			(dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)) {
> +			enum rte_eth_nb_pools rx_pools =
> +				dev_conf->rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
> +			enum rte_eth_nb_pools tx_pools =
> +				dev_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
> +
> +			if (rx_pools != tx_pools) {
> +				/* Only equal number of pools is supported when
> +				 * DCB+VMDq in SRIOV */
> +				PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> +						" SRIOV active, DCB+VMDQ mode, "
> +						"number of rx and tx pools is not equal\n",
> +						port_id);
> +				return (-EINVAL);
> +			}
> +		}
> +
> +		uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
> +		uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
> +
>  		switch (dev_conf->rxmode.mq_mode) {
> -		case ETH_MQ_RX_VMDQ_RSS:
>  		case ETH_MQ_RX_VMDQ_DCB:
> +			break;
> +		case ETH_MQ_RX_VMDQ_RSS:
>  		case ETH_MQ_RX_VMDQ_DCB_RSS:
> -			/* DCB/RSS VMDQ in SRIOV mode, not implemented yet */
> +			/* RSS, DCB+RSS VMDQ in SRIOV mode, not implemented yet */
>  			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>  					" SRIOV active, "
>  					"unsupported VMDQ mq_mode rx %u\n",
> @@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
>  			/* if nothing mq mode configure, use default scheme */
>  			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
> -			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> -				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> +			if (nb_rx_q_per_pool > 1)
> +				nb_rx_q_per_pool = 1;
>  			break;
>  		}
>
>  		switch (dev_conf->txmode.mq_mode) {
> -		case ETH_MQ_TX_VMDQ_DCB:
> -			/* DCB VMDQ in SRIOV mode, not implemented yet */
> -			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> -					" SRIOV active, "
> -					"unsupported VMDQ mq_mode tx %u\n",
> -					port_id, dev_conf->txmode.mq_mode);
> -			return (-EINVAL);
> +		case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode */
> +			break;
>  		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
>  			/* if nothing mq mode configure, use default scheme */
>  			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
> -			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> -				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> +			if (nb_tx_q_per_pool > 1)
> +				nb_tx_q_per_pool = 1;
>  			break;
>  		}
>
>  		/* check valid queue number */
> -		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
> -		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
> +		if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
>  			PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
> -				    "queue number must less equal to %d\n",
> -				    port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
> +				    "rx/tx queue number must less equal to %d/%d\n",
> +				    port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
> +				    RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
>  			return (-EINVAL);
>  		}
>  	} else {
> -		/* For vmdb+dcb mode check our configuration before we go further */
> +		/* For vmdq+dcb mode check our configuration before we go further */
>  		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
>  			const struct rte_eth_vmdq_dcb_conf *conf;
>
> @@ -606,11 +624,20 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  			}
>  		}
>
> +		/* For DCB we need to obtain maximum number of queues dynamically,
> +		 * as this depends on max VF exported in PF */
> +		if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
> +			(dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
> +
> +			FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> +			(*dev->dev_ops->dev_infos_get)(dev, &dev_info);
> +		}
> +
>  		/* For DCB mode check our configuration before we go further */
>  		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
>  			const struct rte_eth_dcb_rx_conf *conf;
>
> -			if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
> +			if (nb_rx_q != dev_info.max_rx_queues) {
>  				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
>  						"!= %d\n",
>  						port_id, ETH_DCB_NUM_QUEUES);
> @@ -630,7 +657,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
>  			const struct rte_eth_dcb_tx_conf *conf;
>
> -			if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
> +			if (nb_tx_q != dev_info.max_tx_queues) {
>  				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
>  						"!= %d\n",
>  						port_id, ETH_DCB_NUM_QUEUES);
> @@ -690,7 +717,10 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  	}
>  	if (nb_rx_q == 0) {
>  		PMD_DEBUG_TRACE("ethdev port_id=%d nb_rx_q == 0\n", port_id);
> -		return (-EINVAL);
> +		/* In SRIOV there can be no free resource for PF. So permit use only
> +		 * for configuration. */
> +		if (RTE_ETH_DEV_SRIOV(dev).active == 0)
> +			return (-EINVAL);
>  	}
>
>  	if (nb_tx_q > dev_info.max_tx_queues) {
> @@ -698,9 +728,13 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  				port_id, nb_tx_q, dev_info.max_tx_queues);
>  		return (-EINVAL);
>  	}
> +
>  	if (nb_tx_q == 0) {
>  		PMD_DEBUG_TRACE("ethdev port_id=%d nb_tx_q == 0\n", port_id);
> -		return (-EINVAL);
> +		/* In SRIOV there can be no free resource for PF. So permit use only
> +		 * for configuration. */
> +		if (RTE_ETH_DEV_SRIOV(dev).active == 0)
> +			return (-EINVAL);
>  	}
>
>  	/* Copy the dev_conf parameter into the dev structure */
> @@ -750,7 +784,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>  							ETHER_MAX_LEN;
>  	}
>
> -	/* multipe queue mode checking */
> +	/* multiple queue mode checking */
>  	diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
>  	if (diag != 0) {
>  		PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index ce0528f..04fda83 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -299,7 +299,7 @@ enum rte_eth_rx_mq_mode {
>  enum rte_eth_tx_mq_mode {
>  	ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */
>  	ETH_MQ_TX_DCB, /**< For TX side,only DCB is on. */
> -	ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
> +	ETH_MQ_TX_VMDQ_DCB, /**< For TX side,both DCB and VT is on. */
>  	ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
>  };
>
> @@ -1569,7 +1569,8 @@ struct rte_eth_dev {
>
>  struct rte_eth_dev_sriov {
>  	uint8_t active;               /**< SRIOV is active with 16, 32 or 64 pools */
> -	uint8_t nb_q_per_pool;        /**< rx queue number per pool */
> +	uint8_t nb_rx_q_per_pool;     /**< rx queue number per pool */
> +	uint8_t nb_tx_q_per_pool;     /**< tx queue number per pool */
>  	uint16_t def_vmdq_idx;        /**< Default pool num used for PF */
>  	uint16_t def_pool_q_idx;      /**< Default pool queue start reg index */
>  };
> diff --git a/lib/librte_pmd_e1000/igb_pf.c b/lib/librte_pmd_e1000/igb_pf.c
> index bc3816a..9d2f858 100644
> --- a/lib/librte_pmd_e1000/igb_pf.c
> +++ b/lib/librte_pmd_e1000/igb_pf.c
> @@ -115,7 +115,8 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
>  		rte_panic("Cannot allocate memory for private VF data\n");
>
>  	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
> -	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
> +	RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
> +	RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
>  	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
>  	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> index 3fc3738..347f03c 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
> @@ -3555,14 +3555,14 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
>  	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>  	struct ixgbe_vf_info *vfinfo =
>  		*(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
> -	uint8_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> +	uint8_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
>  	uint32_t queue_stride =
>  		IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
>  	uint32_t queue_idx = vf * queue_stride, idx = 0, vf_idx;
> -	uint32_t queue_end = queue_idx + nb_q_per_pool - 1;
> +	uint32_t tx_queue_end = queue_idx + nb_tx_q_per_pool - 1;
>  	uint16_t total_rate = 0;
>
> -	if (queue_end >= hw->mac.max_tx_queues)
> +	if (tx_queue_end >= hw->mac.max_tx_queues)
>  		return -EINVAL;
>
>  	if (vfinfo != NULL) {
> @@ -3577,7 +3577,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
>  		return -EINVAL;
>
>  	/* Store tx_rate for this vf. */
> -	for (idx = 0; idx < nb_q_per_pool; idx++) {
> +	for (idx = 0; idx < nb_tx_q_per_pool; idx++) {
>  		if (((uint64_t)0x1 << idx) & q_msk) {
>  			if (vfinfo[vf].tx_rate[idx] != tx_rate)
>  				vfinfo[vf].tx_rate[idx] = tx_rate;
> @@ -3595,7 +3595,7 @@ static int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
>  	}
>
>  	/* Set RTTBCNRC of each queue/pool for vf X */
> -	for (; queue_idx <= queue_end; queue_idx++) {
> +	for (; queue_idx <= tx_queue_end; queue_idx++) {
>  		if (0x1 & q_msk)
>  			ixgbe_set_queue_rate_limit(dev, queue_idx, tx_rate);
>  		q_msk = q_msk >> 1;
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> index ca99170..ebf16e9 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
> @@ -159,6 +159,7 @@ struct ixgbe_vf_info {
>  	uint16_t tx_rate[IXGBE_MAX_QUEUE_NUM_PER_VF];
>  	uint16_t vlan_count;
>  	uint8_t spoofchk_enabled;
> +	unsigned int vf_api;
>  };
>
>  /*
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> index 51da1fd..4d30bcf 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> @@ -127,7 +127,8 @@ void ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
>  		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
>  	}
>
> -	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
> +	RTE_ETH_DEV_SRIOV(eth_dev).nb_rx_q_per_pool = nb_queue;
> +	RTE_ETH_DEV_SRIOV(eth_dev).nb_tx_q_per_pool = nb_queue;
>  	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
>  	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
>
> @@ -189,7 +190,7 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
>  	hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
>
>  	/*
> -	 * SW msut set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
> +	 * SW must set GCR_EXT.VT_Mode the same as GPIE.VT_Mode
>  	 */
>  	gcr_ext = IXGBE_READ_REG(hw, IXGBE_GCR_EXT);
>  	gcr_ext &= ~IXGBE_GCR_EXT_VT_MODE_MASK;
> @@ -214,19 +215,19 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
>  	}
>
>  	IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
> -	IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
> +	IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
>
> -	/*
> +	/*
>  	 * enable vlan filtering and allow all vlan tags through
>  	 */
> -	vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
> -	vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
> -	IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
> +	vlanctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
> +	vlanctrl |= IXGBE_VLNCTRL_VFE ; /* enable vlan filters */
> +	IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlanctrl);
>
> -	/* VFTA - enable all vlan filters */
> -	for (i = 0; i < IXGBE_MAX_VFTA; i++) {
> -		IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
> -	}
> +	/* VFTA - enable all vlan filters */
> +	for (i = 0; i < IXGBE_MAX_VFTA; i++) {
> +		IXGBE_WRITE_REG(hw, IXGBE_VFTA(i), 0xFFFFFFFF);
> +	}
>
>  	/* Enable MAC Anti-Spoofing */
>  	hw->mac.ops.set_mac_anti_spoofing(hw, FALSE, vf_num);
> @@ -369,6 +370,73 @@ ixgbe_vf_reset(struct rte_eth_dev *dev, uint16_t vf, uint32_t *msgbuf)
>  }
>
>  static int
> +ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> +{
> +	struct ixgbe_vf_info *vfinfo =
> +		*(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
> +	int api = msgbuf[1];
> +
> +	switch (api) {
> +	case ixgbe_mbox_api_10:
> +	case ixgbe_mbox_api_11:
> +		vfinfo[vf].vf_api = api;
> +		return 0;
> +	default:
> +		break;
> +	}
> +
> +	RTE_LOG(DEBUG, PMD, "VF %d requested invalid api version %u\n", vf, api);
> +	return -1;
> +}
> +
> +static int
> +ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> +{
> +	struct ixgbe_vf_info *vfinfo =
> +		*(IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private));
> +	struct ixgbe_dcb_config *dcb_cfg =
> +		IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
> +
> +	uint8_t num_tcs = dcb_cfg->num_tcs.pg_tcs;
> +
> +	/* verify the PF is supporting the correct APIs */
> +	switch (vfinfo[vf].vf_api) {
> +	case ixgbe_mbox_api_10:
> +	case ixgbe_mbox_api_11:
> +		break;
> +	default:
> +		return -1;
> +	}
> +
> +	if (RTE_ETH_DEV_SRIOV(dev).active) {
> +		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB)
> +			msgbuf[IXGBE_VF_TX_QUEUES] = num_tcs;
> +		else
> +			msgbuf[IXGBE_VF_TX_QUEUES] = 1;
> +
> +		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)
> +			msgbuf[IXGBE_VF_RX_QUEUES] = num_tcs;
> +		else
> +			msgbuf[IXGBE_VF_RX_QUEUES] = 1;
> +	} else {
> +		/* only allow 1 Tx queue for bandwidth limiting */
> +		msgbuf[IXGBE_VF_TX_QUEUES] = 1;
> +		msgbuf[IXGBE_VF_RX_QUEUES] = 1;
> +	}
> +
> +	/* notify VF of need for VLAN tag stripping, and correct queue */
> +	if (num_tcs)
> +		msgbuf[IXGBE_VF_TRANS_VLAN] = num_tcs;
> +	else
> +		msgbuf[IXGBE_VF_TRANS_VLAN] = 0;
> +
> +	/* notify VF of default queue */
> +	msgbuf[IXGBE_VF_DEF_QUEUE] = 0;
> +
> +	return 0;
> +}
> +
> +static int
>  ixgbe_vf_set_mac_addr(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
>  {
>  	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> @@ -512,6 +580,12 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
>  	case IXGBE_VF_SET_VLAN:
>  		retval = ixgbe_vf_set_vlan(dev, vf, msgbuf);
>  		break;
> +	case IXGBE_VF_API_NEGOTIATE:
> +		retval = ixgbe_negotiate_vf_api(dev, vf, msgbuf);
> +		break;
> +	case IXGBE_VF_GET_QUEUES:
> +		retval = ixgbe_get_vf_queues(dev, vf, msgbuf);
> +		break;
>  	default:
>  		PMD_DRV_LOG(DEBUG, "Unhandled Msg %8.8x", (unsigned)msgbuf[0]);
>  		retval = IXGBE_ERR_MBX;
> @@ -526,7 +600,7 @@ ixgbe_rcv_msg_from_vf(struct rte_eth_dev *dev, uint16_t vf)
>
>  	msgbuf[0] |= IXGBE_VT_MSGTYPE_CTS;
>
> -	ixgbe_write_mbx(hw, msgbuf, 1, vf);
> +	ixgbe_write_mbx(hw, msgbuf, mbx_size, vf);
>
>  	return retval;
>  }
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index e10d6a2..49b44fe 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> @@ -3166,10 +3166,9 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
>
>  	/* check support mq_mode for DCB */
>  	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
> -	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
> -		return;
> -
> -	if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
> +	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
> +	    (dev_conf->txmode.mq_mode != ETH_MQ_TX_VMDQ_DCB) &&
> +	    (dev_conf->txmode.mq_mode != ETH_MQ_TX_DCB))
>  		return;
>
>  	/** Configure DCB hardware **/
> --
> 1.7.9.5

Self-nacked - because of wrong message format.