From: Vladislav Zolotarov
To: Changchun Ouyang
Cc: dev@dpdk.org
Date: Fri, 26 Dec 2014 09:37:15 +0200
Subject: Re: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS
On Dec 26, 2014 9:28 AM, "Ouyang, Changchun" wrote:
>
> Hi Vladislav,
>
> From: Vladislav Zolotarov [mailto:vladz@cloudius-systems.com]
> Sent: Friday, December 26, 2014 2:49 PM
> To: Ouyang, Changchun
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS
>
> On Dec 26, 2014 3:52 AM, "Ouyang, Changchun" wrote:
> >
> > > -----Original Message-----
> > > From: Vlad Zolotarov [mailto:vladz@cloudius-systems.com]
> > > Sent: Thursday, December 25, 2014 9:20 PM
> > > To: Ouyang, Changchun; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS
> > >
> > > On 12/25/14 04:43, Ouyang, Changchun wrote:
> > > > Hi,
> > > > Sorry, I missed some comments, so I continue my response below.
> > > >
> > > >> -----Original Message-----
> > > >> From: Vlad Zolotarov [mailto:vladz@cloudius-systems.com]
> > > >> Sent: Wednesday, December 24, 2014 6:40 PM
> > > >> To: Ouyang, Changchun; dev@dpdk.org
> > > >> Subject: Re: [dpdk-dev] [PATCH v3 5/6] ixgbe: Config VF RSS
> > > >>
> > > >> On 12/24/14 07:23, Ouyang Changchun wrote:
> > > >>> RSS, IXGBE_MRQC and IXGBE_VFPSRTYPE need to be configured to
> > > >>> enable VF RSS.
> > > >>> The psrtype determines how many queues the received packets will
> > > >>> be distributed to, and its value should depend on two factors:
> > > >>> the max VF rxq number that has been negotiated with the PF, and
> > > >>> the number of rxq specified in the config on the guest.
> > > >>> Signed-off-by: Changchun Ouyang
> > > >>> ---
> > > >>>  lib/librte_pmd_ixgbe/ixgbe_pf.c   | 15 +++++++
> > > >>>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 92 ++++++++++++++++++++++++++++++++++-----
> > > >>>  2 files changed, 97 insertions(+), 10 deletions(-)
> > > >>>
> > > >>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_pf.c b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> > > >>> index cbb0145..9c9dad8 100644
> > > >>> --- a/lib/librte_pmd_ixgbe/ixgbe_pf.c
> > > >>> +++ b/lib/librte_pmd_ixgbe/ixgbe_pf.c
> > > >>> @@ -187,6 +187,21 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
> > > >>>  	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(hw->mac.num_rar_entries), 0);
> > > >>>  	IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(hw->mac.num_rar_entries), 0);
> > > >>>
> > > >>> +	/*
> > > >>> +	 * VF RSS can support at most 4 queues for each VF. Even if
> > > >>> +	 * 8 queues are available for each VF, they need to be refined
> > > >>> +	 * to 4 queues here due to this limitation; otherwise no queue
> > > >>> +	 * will receive any packet even when RSS is enabled.
> > > >>
> > > >> According to Table 7-3 in the 82599 spec, RSS is not available when
> > > >> the port is configured to have 8 queues per pool. This means that if
> > > >> u see this configuration u may immediately disable the RSS flow in
> > > >> your code.
> > > >>
> > > >>> +	 */
> > > >>> +	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_RSS) {
> > > >>> +		if (RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool == 8) {
> > > >>> +			RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
> > > >>> +			RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = 4;
> > > >>> +			RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
> > > >>> +				dev_num_vf(eth_dev) * 4;
> > > >>
> > > >> According to the 82599 spec u can't do that, since RSS is not allowed
> > > >> when the port is configured to have 8 queues per VF. Have u verified
> > > >> that this works? If yes, then the spec should be updated.
> > > >>
> > > >>> +		}
> > > >>> +	}
> > > >>> +
> > > >>>  	/* set VMDq map to default PF pool */
> > > >>>  	hw->mac.ops.set_vmdq(hw, 0, RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx);
> > > >>>
> > > >>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > >>> index f69abda..a7c17a4 100644
> > > >>> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > >>> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > >>> @@ -3327,6 +3327,39 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
> > > >>>  }
> > > >>>
> > > >>>  static int
> > > >>> +ixgbe_config_vf_rss(struct rte_eth_dev *dev)
> > > >>> +{
> > > >>> +	struct ixgbe_hw *hw;
> > > >>> +	uint32_t mrqc;
> > > >>> +
> > > >>> +	ixgbe_rss_configure(dev);
> > > >>> +
> > > >>> +	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > > >>> +
> > > >>> +	/* MRQC: enable VF RSS */
> > > >>> +	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
> > > >>> +	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
> > > >>> +	switch (RTE_ETH_DEV_SRIOV(dev).active) {
> > > >>> +	case ETH_64_POOLS:
> > > >>> +		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
> > > >>> +		break;
> > > >>> +
> > > >>> +	case ETH_32_POOLS:
> > > >>> +	case ETH_16_POOLS:
> > > >>> +		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
> > > >>
> > > >> Again, this contradicts the spec.
> > > >
> > > > Yes, the spec says the hw can't support VF RSS at all, but experiments
> > > > found that it could be done.
> > >
> > > The spec explicitly says that VF RSS *is* supported, in particular in
> > > the table mentioned above.
> >
> > But the spec (January 2014, revision 2.9) on my hand says: "in IOV mode,
> > VMDq+RSS mode is not available" in a note in section 4.6.10.2.1.
>
> And still there is a whole section about configuring packet filtering,
> including Rx in the VF mode (including the table I've referred to). It's
> quite confusing, I must say...
>
> Changchun: do you mind telling me which table you are referring to? I will
> try to have a look and may share my thoughts if I can.
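To make the pool-layout constraint under discussion concrete, here is a small standalone sketch of the refinement the patch performs. All names (`vf_rss_layout`, `refine_for_vf_rss`) are invented for illustration; this is neither DPDK nor datasheet code. The idea: an 8-queues-per-pool layout has no VMDq+RSS mode on the 82599, so it is folded back to 32 pools with 4 queues per pool, and the 16-pool case simply reuses the 32-pool RSS encoding with the upper pools left unused.

```c
#include <assert.h>

/*
 * Illustrative sketch only -- names are made up, not taken from the
 * ixgbe driver or the 82599 datasheet. It models what the hunks above
 * do: when VMDq+RSS is requested and the negotiated layout gives each
 * pool 8 queues (for which no VMDq+RSS mode exists), fall back to the
 * 32-pool layout with 4 queues per pool.
 */
struct vf_rss_layout {
	int pools;            /* active SR-IOV pools programmed in MRQC */
	int queues_per_pool;  /* Rx queues usable per VF */
};

static struct vf_rss_layout refine_for_vf_rss(int nb_q_per_pool)
{
	struct vf_rss_layout l;

	if (nb_q_per_pool == 8) {
		/* 8 queues/pool has no VMDq+RSS mode: drop to 32 pools x 4 */
		l.pools = 32;
		l.queues_per_pool = 4;
	} else {
		/* 82599 splits its 128 Rx queues evenly across the pools */
		l.pools = 128 / nb_q_per_pool;
		l.queues_per_pool = nb_q_per_pool;
	}
	return l;
}
```

The `128 / nb_q_per_pool` division reflects the 82599's 128 Rx queues being split evenly across pools (64 pools x 2 queues, 32 x 4, 16 x 8).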
> > > What your code is doing is that in the case of 16 VFs u set up a
> > > 32-pool configuration and use only 16 out of them.
> >
> > But I don't see any big issue here; in this case each VF COULD have 8
> > queues, like I said before, but this is an estimated value. Actually
> > only 4 queues are really available for one VF; you can refer to the
> > spec for the correctness here.
>
> No issues, i just wanted to clarify that it seems like you are doing it
> quite according to the spec.
>
> > We can focus on discussing the implementation first.
>
> Right. So, after we clarified that there is nothing u can do at the
> moment about the rss query flow, there is one more open issue here.
>
> In general we need a way to know how many of the available queues may be
> configured as RSS. While the same issue is present with the PF as well
> (it's 16 for the 82599 but may be a different number for a different
> device), for a VF it's more pronounced since it depends on the PF
> configuration.
>
> Don't u think it would be logical to add a specific field for it in the
> dev_info struct?
>
> Changchun: you are right, and we already have max_rx_queues in dev_info.
> While negotiating between pf and vf, the negotiated max rx queue number
> will be set into hw->mac.max_rx_queues, and after that, when you call
> ixgbe_dev_info_get, that value will be set into dev_info->max_rx_queues.
> Then you could get the number of queues all packets will be distributed
> to from dev_info->max_rx_queues.

I'm afraid u've missed my point here. For instance, for a PF,
max_rx_queues will be set to 128 while u may only configure 16 RSS queues.
Something similar will happen for a VF in the 16-VF configuration:
max_rx_queues will be set to 8 while u may configure only 4 RSS queues.
This is why i suggested adding a separate info field... 😉

>
> Thanks
> Changchun
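The max_rx_queues point above can be sketched as follows. The struct and field names here are hypothetical, not the real rte_eth_dev_info; the example values (128/16 for the PF, 8/4 for a VF in the 16-VF configuration) are the ones quoted in the thread for the 82599.

```c
#include <assert.h>

/*
 * Hypothetical sketch of the suggested separate field -- these are NOT
 * real rte_eth_dev_info members. The point being made: the number of
 * queues a function exposes (max_rx_queues) can exceed the number RSS
 * can actually spread traffic over, so the latter needs its own field.
 */
struct dev_info_sketch {
	unsigned int max_rx_queues;   /* queues visible to the function */
	unsigned int max_rss_queues;  /* queues RSS may actually target */
};

/* Example values from the thread (82599). */
static const struct dev_info_sketch pf_82599       = { 128, 16 };
static const struct dev_info_sketch vf_in_16vf_cfg = {   8,  4 };
```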