From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <helin.zhang@intel.com>
Received: from mga14.intel.com (mga14.intel.com [192.55.52.115])
 by dpdk.org (Postfix) with ESMTP id 56ACA5A97
 for <dev@dpdk.org>; Wed, 11 Nov 2015 09:57:00 +0100 (CET)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by fmsmga103.fm.intel.com with ESMTP; 11 Nov 2015 00:57:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.20,274,1444719600"; d="scan'208";a="847741254"
Received: from shvmail01.sh.intel.com ([10.239.29.42])
 by orsmga002.jf.intel.com with ESMTP; 11 Nov 2015 00:56:58 -0800
Received: from shecgisg004.sh.intel.com (shecgisg004.sh.intel.com
 [10.239.29.89]) by shvmail01.sh.intel.com with ESMTP id tAB8uuTb017527;
 Wed, 11 Nov 2015 16:56:56 +0800
Received: from shecgisg004.sh.intel.com (localhost [127.0.0.1])
 by shecgisg004.sh.intel.com (8.13.6/8.13.6/SuSE Linux 0.8) with ESMTP
 id tAB8urG8002375; Wed, 11 Nov 2015 16:56:55 +0800
Received: (from hzhan75@localhost)
 by shecgisg004.sh.intel.com (8.13.6/8.13.6/Submit) id tAB8urXO002371;
 Wed, 11 Nov 2015 16:56:53 +0800
From: Helin Zhang <helin.zhang@intel.com>
To: dev@dpdk.org
Date: Wed, 11 Nov 2015 16:56:45 +0800
Message-Id: <1447232205-2339-1-git-send-email-helin.zhang@intel.com>
X-Mailer: git-send-email 1.7.4.1
Subject: [dpdk-dev] [PATCH] i40e: fix being unable to use more than 1 pool
 for VMDq
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
X-List-Received-Date: Wed, 11 Nov 2015 08:57:00 -0000

It fixes the issue of being unable to use more than 1 pool for VMDq, by
deriving the number of VMDq pools from the number of queues left after
the other VSIs have taken theirs.

Fixes: 705b57f82054 ("i40e: enlarge the number of supported queues")

Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c | 36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ddf3d38..c5cd06f 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3120,17 +3120,33 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 
 	/* VMDq queue/VSI allocation */
 	pf->vmdq_qp_offset = pf->vf_qp_offset + pf->vf_nb_qps * pf->vf_num;
+	pf->vmdq_nb_qps = 0;
+	pf->max_nb_vmdq_vsi = 0;
 	if (hw->func_caps.vmdq) {
-		pf->flags |= I40E_FLAG_VMDQ;
-		pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
-		pf->max_nb_vmdq_vsi = 1;
-		PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues per VMDQ VSI, "
-			    "in total %u queues", pf->max_nb_vmdq_vsi,
-			    pf->vmdq_nb_qps,
-			    pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi);
-	} else {
-		pf->vmdq_nb_qps = 0;
-		pf->max_nb_vmdq_vsi = 0;
+		if (qp_count < hw->func_caps.num_tx_qp) {
+			pf->max_nb_vmdq_vsi = (hw->func_caps.num_tx_qp -
+				qp_count) / pf->vmdq_nb_qp_max;
+
+			/* Limit the maximum number of VMDq VSIs to the
+			 * maximum that ethdev can support.
+			 */
+			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
+						      ETH_64_POOLS);
+			if (pf->max_nb_vmdq_vsi) {
+				pf->flags |= I40E_FLAG_VMDQ;
+				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
+				PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues "
+					    "per VMDQ VSI, in total %u queues",
+					    pf->max_nb_vmdq_vsi,
+					    pf->vmdq_nb_qps, pf->vmdq_nb_qps *
+					    pf->max_nb_vmdq_vsi);
+			} else {
+				PMD_DRV_LOG(INFO, "Not enough queues left "
+					    "for VMDq");
+			}
+		} else {
+			PMD_DRV_LOG(INFO, "No queue left for VMDq");
+		}
 	}
 	qp_count += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
 	vsi_count += pf->max_nb_vmdq_vsi;
-- 
1.9.3
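
For reference, the arithmetic in the hunk above can be exercised on its
own. Below is a minimal, self-contained C sketch of the new pool
computation; the capability numbers (num_tx_qp, qp_count,
vmdq_nb_qp_max) are hypothetical examples rather than values read from
hardware, and ETH_64_POOLS and MIN() are local stand-ins for the DPDK
definitions in rte_ethdev.h and rte_common.h.

#include <stdio.h>
#include <stdint.h>

/* Stand-in for ETH_64_POOLS: the most pools ethdev can address. */
#define ETH_64_POOLS	64
/* Local stand-in for DPDK's RTE_MIN(). */
#define MIN(a, b)	((a) < (b) ? (a) : (b))

int
main(void)
{
	/* Hypothetical capability numbers, chosen for illustration only. */
	uint16_t num_tx_qp = 384;	/* TX queues the PF exposes */
	uint16_t qp_count = 128;	/* queues already claimed (main VSI, VFs) */
	uint16_t vmdq_nb_qp_max = 8;	/* queue pairs per VMDq pool */
	uint16_t max_nb_vmdq_vsi = 0;

	if (qp_count < num_tx_qp) {
		/* Pools that fit into the queues still unclaimed... */
		max_nb_vmdq_vsi = (num_tx_qp - qp_count) / vmdq_nb_qp_max;
		/* ...capped at the ethdev pool limit. */
		max_nb_vmdq_vsi = MIN(max_nb_vmdq_vsi, ETH_64_POOLS);
	}

	if (max_nb_vmdq_vsi)
		printf("%u VMDq pools, %u queues each, %u queues in total\n",
		       (unsigned)max_nb_vmdq_vsi, (unsigned)vmdq_nb_qp_max,
		       (unsigned)(max_nb_vmdq_vsi * vmdq_nb_qp_max));
	else
		printf("not enough queues left for VMDq\n");

	return 0;
}

With these example numbers, (384 - 128) / 8 = 32 pools, well under the
ETH_64_POOLS cap; before the patch the driver always configured exactly
one pool of vmdq_nb_qp_max queues regardless of how many queues were
left.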