From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <549C1359.7080107@cloudius-systems.com>
Date: Thu, 25 Dec 2014 15:38:33 +0200
From: Vlad Zolotarov
To: "Ouyang, Changchun" , "dev@dpdk.org"
References: <1419389808-9559-1-git-send-email-changchun.ouyang@intel.com> <1419398584-19520-1-git-send-email-changchun.ouyang@intel.com> <1419398584-19520-6-git-send-email-changchun.ouyang@intel.com> <549A97F6.30901@cloudius-systems.com>
Subject: Re: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS
List-Id: patches and discussions about DPDK

On 12/25/14 04:43, Ouyang, Changchun wrote:
> Hi,
> Sorry, I missed some comments, so I am continuing my response below.
>
>> -----Original Message-----
>> From: Vlad Zolotarov [mailto:vladz@cloudius-systems.com]
>> Sent: Wednesday, December 24, 2014 6:40 PM
>> To: Ouyang, Changchun; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS
>>
>>
>> On 12/24/14 07:23, Ouyang Changchun wrote:
>>> RSS, IXGBE_MRQC and IXGBE_VFPSRTYPE need to be configured to enable VF RSS.
>>> The psrtype determines how many queues the received packets will be
>>> distributed to, and its value should depend on two facets: the max VF
>>> rxq number negotiated with the PF, and the number of rxq specified in
>>> the guest configuration.
>>> Signed-off-by: Changchun Ouyang
>>> ---
>>>  lib/librte_pmd_ixgbe/ixgbe_pf.c   | 15 +++++++
>>>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 92 ++++++++++++++++++++++++++++++++++-----
>>>  2 files changed, 97 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
>>> index cbb0145..9c9dad8 100644
>>> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
>>> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
>>> @@ -187,6 +187,21 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
>>>  	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(hw->mac.num_rar_entries), 0);
>>>  	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(hw->mac.num_rar_entries), 0);
>>>
>>> +	/*
>>> +	 * VF RSS can support at most 4 queues for each VF. Even if
>>> +	 * 8 queues are available for each VF, it needs to be refined
>>> +	 * to 4 queues here due to this limitation; otherwise no queue
>>> +	 * will receive any packet even when RSS is enabled.
>> According to Table 7-3 in the 82599 spec, RSS is not available when the port
>> is configured to have 8 queues per pool. This means that if you see this
>> configuration you may immediately disable the RSS flow in your code.
>>
>>> +	 */
>>> +	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_RSS) {
>>> +		if (RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool == 8) {
>>> +			RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
>>> +			RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = 4;
>>> +			RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
>>> +				dev_num_vf(eth_dev) * 4;
>> According to the 82599 spec you can't do that, since RSS is not allowed when
>> the port is configured to have 8 queues per VF. Have you verified that this
>> works? If yes, then the spec should be updated.
>>
>>> +		}
>>> +	}
>>> +
>>>  	/* set VMDq map to default PF pool */
>>>  	hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
>>>
>>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>>> index f69abda..a7c17a4 100644
>>> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>>> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>>> @@ -3327,6 +3327,39 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
>>>  }
>>>
>>>  static int
>>> +ixgbe_config_vf_rss(struct rte_eth_dev *dev)
>>> +{
>>> +	struct ixgbe_hw *hw;
>>> +	uint32_t mrqc;
>>> +
>>> +	ixgbe_rss_configure(dev);
>>> +
>>> +	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>>> +
>>> +	/* MRQC: enable VF RSS */
>>> +	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
>>> +	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
>>> +	switch (RTE_ETH_DEV_SRIOV(dev).active) {
>>> +	case ETH_64_POOLS:
>>> +		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
>>> +		break;
>>> +
>>> +	case ETH_32_POOLS:
>>> +	case ETH_16_POOLS:
>>> +		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
>> Again, this contradicts the spec.
> Yes, the spec says the hw can't support VF RSS at all, but experiments found that it could be done.
I have just realized something - why did you have to experiment at all? You work at Intel, don't you? Can't you just ask a HW engineer who designed this NIC? What do you mean by an "experiment" here? From my experience you can't just write some random values into registers and conclude that if it worked for five minutes it will continue to work for the next minute... There is always a clear procedure for how HW should be initialized and used, and that's the only way it may be used, since this is the way the HW has been tested. You can't assume anything about reliability if you don't follow the specs and programmer manuals of the HW provider. Could you clarify, please?
> We can focus on discussing the implementation first.
>
>>> +		break;
>>> +
>>> +	default:
>>> +		PMD_INIT_LOG(ERR, "Invalid pool number in IOV mode");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int
>>>  ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
>>>  {
>>>  	struct ixgbe_hw *hw =
>>> @@ -3358,24 +3391,38 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
>>>  		default: ixgbe_rss_disable(dev);
>>>  		}
>>>  	} else {
>>> -		switch (RTE_ETH_DEV_SRIOV(dev).active) {
>>>  		/*
>>>  		 * SRIOV active scheme
>>>  		 * FIXME if support DCB/RSS together with VMDq & SRIOV
>>>  		 */
>>> -		case ETH_64_POOLS:
>>> -			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQEN);
>>> +		switch (dev->data->dev_conf.rxmode.mq_mode) {
>>> +		case ETH_MQ_RX_RSS:
>>> +		case ETH_MQ_RX_VMDQ_RSS:
>>> +			ixgbe_config_vf_rss(dev);
>>>  			break;
>>>
>>> -		case ETH_32_POOLS:
>>> -			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT4TCEN);
>>> -			break;
>>> +		default:
>>> +			switch (RTE_ETH_DEV_SRIOV(dev).active) {
>> Sorry for nitpicking, but have you considered pulling this nested
>> "switch-case" block into a separate function? It could make the code look
>> a lot nicer.
>> ;)
>>
>>> +			case ETH_64_POOLS:
>>> +				IXGBE_WRITE_REG(hw, IXGBE_MRQC,
>>> +					IXGBE_MRQC_VMDQEN);
>>> +				break;
>>>
>>> -		case ETH_16_POOLS:
>>> -			IXGBE_WRITE_REG(hw, IXGBE_MRQC, IXGBE_MRQC_VMDQRT8TCEN);
>>> +			case ETH_32_POOLS:
>>> +				IXGBE_WRITE_REG(hw, IXGBE_MRQC,
>>> +					IXGBE_MRQC_VMDQRT4TCEN);
>>> +				break;
>>> +
>>> +			case ETH_16_POOLS:
>>> +				IXGBE_WRITE_REG(hw, IXGBE_MRQC,
>>> +					IXGBE_MRQC_VMDQRT8TCEN);
>>> +				break;
>>> +			default:
>>> +				PMD_INIT_LOG(ERR,
>>> +					"invalid pool number in IOV mode");
>>> +				break;
>>> +			}
>>>  			break;
>>> -		default:
>>> -			PMD_INIT_LOG(ERR, "invalid pool number in IOV mode");
>>>  		}
>>>  	}
>>>
>>> @@ -3989,10 +4036,32 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>>>  	uint16_t buf_size;
>>>  	uint16_t i;
>>>  	int ret;
>>> +	uint16_t valid_rxq_num;
>>>
>>>  	PMD_INIT_FUNC_TRACE();
>>>  	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>>>
>>> +	valid_rxq_num = RTE_MIN(dev->data->nb_rx_queues, hw->mac.max_rx_queues);
>>> +
>>> +	/*
>>> +	 * VMDq RSS can't support 3 queues, so configure 4 queues instead,
>>> +	 * and give the user a hint that some packets may be lost if they
>>> +	 * don't poll the queue those packets are distributed to.
>>> +	 */
>>> +	if (valid_rxq_num == 3)
>>> +		valid_rxq_num = 4;
>> Why configure more queues than requested and not fewer (2)? Why configure
>> anything at all and not return an error?
> Sorry, I don't agree this is "anything" as you say, because I don't use 5, 6, 7, 8, ..., 16, 2014, 2015, ... etc.
> Considering 2 or 4,
> I prefer 4. The reason is that if the user needs more than 3 queues per VF to do something,
> and the PF also has the capability to set up 4 queues per VF, confining it to 2 queues is not a good thing either.
> So here we try to enable 4 queues and give the user a hint.
> Btw, changing it to 2 is another way; that depends on other guys' insight here.
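As a standalone illustration of the 1/2/4-queue rule being debated here: the helper below is an editorial sketch, not code from the patch (the name vf_rss_normalize_rxq is invented for the example). RSS on an 82599 VF distributes over 1, 2 or 4 queues per pool, so a request for 3 queues has to be moved to an adjacent valid count; the patch rounds up, and the review suggests rounding down to 2 as the alternative.

```c
#include <stdint.h>

/* Illustrative only: clamp a requested VF Rx queue count to the negotiated
 * maximum, then move the unsupported count of 3 up to 4, mirroring what the
 * patch does in ixgbevf_dev_rx_init().  Rounding 3 down to 2 would be the
 * alternative proposed in the review. */
static uint16_t
vf_rss_normalize_rxq(uint16_t requested, uint16_t max_rx_queues)
{
	uint16_t n = requested < max_rx_queues ? requested : max_rx_queues;

	/* RSS spreads over 1, 2 or 4 queues per pool; 3 is not valid. */
	if (n == 3)
		n = 4;
	return n;
}
```

With a negotiated maximum of 4, requests of 1..4 map to 1, 2, 4, 4; the case where the result exceeds the request is exactly the one the patch warns about, since the extra queue may receive packets nobody polls.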
>
>>> +
>>> +	if (dev->data->nb_rx_queues > valid_rxq_num) {
>>> +		PMD_INIT_LOG(ERR, "The number of Rx queues is invalid, "
>>> +			"it should be equal to or less than %d",
>>> +			valid_rxq_num);
>>> +		return -1;
>>> +	} else if (dev->data->nb_rx_queues < valid_rxq_num)
>>> +		PMD_INIT_LOG(ERR, "The number of Rx queues is less "
>>> +			"than the number of available Rx queues:%d, "
>>> +			"packets in Rx queues(q_id >= %d) may be lost.",
>>> +			valid_rxq_num, dev->data->nb_rx_queues);
>> Who ever looks in the "INIT_LOG" when everything "works well"? And you make
>> it look so by allowing this call to succeed. And then some packets will just
>> silently not arrive?! And what should the user do - somehow guess to look
>> in the "INIT_LOG"?! This is a nightmare!
>>
>>> +
>>>  	/*
>>>  	 * When the VF driver issues a IXGBE_VF_RESET request, the PF driver
>>>  	 * disables the VF receipt of packets if the PF MTU is > 1500.
>>> @@ -4094,6 +4163,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>>>  		IXGBE_PSRTYPE_IPV6HDR;
>>>  #endif
>>>
>>> +	/* Set RQPL for VF RSS according to max Rx queue */
>>> +	psrtype |= (valid_rxq_num >> 1) << IXGBE_PSRTYPE_RQPL_SHIFT;
>>>  	IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
>>>
>>>  	if (dev->data->dev_conf.rxmode.enable_scatter) {
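For readers following the PSRTYPE point, the RQPL computation in the last hunk can be shown in isolation. This is a minimal editorial sketch, not part of the patch (vf_psrtype_rqpl is an invented name), assuming the usual ixgbe header value IXGBE_PSRTYPE_RQPL_SHIFT == 29: the two-bit RQPL field of VFPSRTYPE selects how many Rx queues RSS distributes to per pool (00b for 1, 01b for 2, 10b for 4), which is why (valid_rxq_num >> 1) maps the legal counts 1/2/4 onto 0/1/2.

```c
#include <stdint.h>

/* Sketch of the RQPL (RSS queues per pool) encoding the patch writes into
 * IXGBE_VFPSRTYPE.  The shift value is assumed to match the ixgbe driver
 * headers; only queue counts 1, 2 and 4 are meaningful inputs. */
#define IXGBE_PSRTYPE_RQPL_SHIFT 29

static uint32_t
vf_psrtype_rqpl(uint16_t valid_rxq_num)
{
	/* 1 -> 00b, 2 -> 01b, 4 -> 10b in the RQPL bit field */
	return (uint32_t)(valid_rxq_num >> 1) << IXGBE_PSRTYPE_RQPL_SHIFT;
}
```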