From: Vlad Zolotarov
To: Ouyang Changchun, dev@dpdk.org
Date: Wed, 24 Dec 2014 11:59:24 +0200
Subject: Re: [dpdk-dev] [PATCH v3 0/6] Enable VF RSS for Niantic
Message-ID: <549A8E7C.7010806@cloudius-systems.com>
In-Reply-To: <1419398584-19520-1-git-send-email-changchun.ouyang@intel.com>
References: <1419389808-9559-1-git-send-email-changchun.ouyang@intel.com> <1419398584-19520-1-git-send-email-changchun.ouyang@intel.com>

On 12/24/14 07:22, Ouyang Changchun wrote:
> This patch enables VF RSS for Niantic, which allows each VF to have at
> most 4 queues. The actual queue number per VF depends on the total
> number of pools, which is determined by the total number of VFs at the
> PF initialization stage and the number of queues specified in the
> config:
>
> 1) If the number of VFs is in the range 1 to 32 and the number of rxq
>    is 4 ('--rxq 4' in testpmd), then there are 32 pools in total
>    (ETH_32_POOLS), and each VF has 4 queues;
>
> 2) If the number of VFs is in the range 33 to 64 and the number of rxq
>    is 2 ('--rxq 2' in testpmd), then there are 64 pools in total
>    (ETH_64_POOLS), and each VF has 2 queues;
>
> On the host, to enable VF RSS functionality, the rx mq mode should be
> set to ETH_MQ_RX_VMDQ_RSS or ETH_MQ_RX_RSS mode, and SR-IOV mode
> should be activated (max_vfs >= 1). The VF RSS information, such as
> the hash function, the RSS key and the RSS key length, also needs to
> be configured.
>
> The limitation for Niantic VF RSS is: the hash and key are shared
> among the PF and all VFs

Hmmm... this kinda contradicts the previous sentence, where you say
that the VF RSS information (hash function and RSS key) should be
configured. If the PF and all VFs share the same hash and key, what's
the use of configuring them per VF? Could you clarify, please?
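
Just so we are reading the description the same way, here is roughly
how I understand the host-side setup. This is a minimal sketch against
the DPDK 1.8 ethdev API; the helper name configure_vf_rss(), the key
bytes and the queue counts are illustrative placeholders, not taken
from the patch:

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical 40-byte RSS key; the byte values are placeholders. */
static uint8_t rss_key[40] = { 0x6d, 0x5a, 0x56, 0xda };

/* configure_vf_rss() is a made-up helper name, not part of the patch. */
static int
configure_vf_rss(uint8_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));

	/* VMDq + RSS Rx mode; SR-IOV must already be on (max_vfs >= 1). */
	conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;

	/* Hash function, key and key length; per the cover letter these
	 * end up shared by the PF and all VFs on Niantic. */
	conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6;
	conf.rx_adv_conf.rss_conf.rss_key = rss_key;
	conf.rx_adv_conf.rss_conf.rss_key_len = sizeof(rss_key);

	/* Per the cover letter: nb_rxq == 4 -> 32 pools (ETH_32_POOLS),
	 * 4 queues per VF; nb_rxq == 2 -> 64 pools (ETH_64_POOLS),
	 * 2 queues per VF. */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_rxq, &conf);
}

If that reading is wrong, that would answer my question above.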
> , the RETA table with 128 entries is
> also shared among the PF and all VFs. So it is not a good idea to
> query the hash and RETA content per VF on the guest; instead, it makes
> sense to query them on the host (PF).

On the contrary - it's a very good idea! We use DPDK on Amazon guests
with enhanced networking, and we have no access to the PF. We still
need to know the RSS redirection rules for our VF pool. From the 82599
spec, chapter 4.6.10.1.1: "redirection table is common to all the pools
and only indicates the queue inside the pool to use once the pool is
chosen". In that case we need to get the whole 128 entries of the RETA.
Is there a reason why we can't have it?

>
> v3 change:
>   - More cleanup;
>
> v2 change:
>   - Update the description;
>   - Use the receiving queue number ('--rxq') specified in the config to
>     determine the number of pools and the number of queues per VF;
>
> v1 change:
>   - Config VF RSS;
>
> Changchun Ouyang (6):
>   ixgbe: Code cleanup
>   ixgbe: Negotiate VF API version
>   ixgbe: Get VF queue number
>   ether: Check VMDq RSS mode
>   ixgbe: Config VF RSS
>   testpmd: Set Rx VMDq RSS mode
>
>  app/test-pmd/testpmd.c              |  10 +++
>  lib/librte_ether/rte_ethdev.c       |  39 +++++++++--
>  lib/librte_pmd_ixgbe/ixgbe_ethdev.h |   1 +
>  lib/librte_pmd_ixgbe/ixgbe_pf.c     |  75 ++++++++++++++++++++++++++++++++++++++++++++-
>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c   | 127 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
>  5 files changed, 219 insertions(+), 33 deletions(-)
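
P.S. To make the use case above concrete, this is the query we would
like to be able to issue from inside the guest, against the VF port. A
minimal sketch using the DPDK 1.8 ethdev RETA API (the helper name
dump_vf_reta() is made up; whether a VF is allowed to read the full
shared 128-entry table is exactly what I'm asking for):

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* dump_vf_reta() is a made-up name; the query API itself is real. */
static int
dump_vf_reta(uint8_t port_id)
{
	/* 128 entries / RTE_RETA_GROUP_SIZE (64) = 2 groups of 64 */
	struct rte_eth_rss_reta_entry64
		reta_conf[ETH_RSS_RETA_SIZE_128 / RTE_RETA_GROUP_SIZE];
	unsigned int i;
	int ret;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < ETH_RSS_RETA_SIZE_128 / RTE_RETA_GROUP_SIZE; i++)
		reta_conf[i].mask = ~0ULL; /* read all 64 entries per group */

	ret = rte_eth_dev_rss_reta_query(port_id, reta_conf,
					 ETH_RSS_RETA_SIZE_128);
	if (ret != 0)
		return ret; /* today a VF may simply not support this */

	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i++)
		printf("RETA[%u] -> queue %u\n", i,
		       (unsigned int)reta_conf[i / RTE_RETA_GROUP_SIZE]
				.reta[i % RTE_RETA_GROUP_SIZE]);
	return 0;
}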