From: dapengx.yu@intel.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com, ting.xu@intel.com
Cc: dev@dpdk.org, YU DAPENG, stable@dpdk.org
Date: Fri, 8 Jan 2021 18:21:11 +0800
Message-Id: <20210108102111.7519-1-dapengx.yu@intel.com>
In-Reply-To: <20201230065347.90115-1-dapengx.yu@intel.com>
References: <20201230065347.90115-1-dapengx.yu@intel.com>
Subject: [dpdk-stable] [PATCH] net/iavf: fix vector id assignment
List-Id: patches for DPDK stable branches
From: YU DAPENG

The number of MSI-X interrupts used for Rx shall be the minimum of (a) the number of available MSI-X interrupts per VF minus 1 (the 1 is reserved for the miscellaneous interrupt) and (b) the number of configured Rx queues. The current code breaks this rule: it uses the number of available MSI-X interrupts as the first value but does not subtract 1 from it.

In the normal situation the first value is larger than the second, so each queue can be assigned a unique vector_id. For example, with 17 available MSI-X interrupts and 16 available Rx queues per VF, but only 4 Rx queues configured when the device is started: vector_id 0 is for the misc interrupt, vector_id 1 is for Rx queue 0, vector_id 2 for Rx queue 1, vector_id 3 for Rx queue 2, and vector_id 4 for Rx queue 3.

The current code breaks the rule even in this normal situation: when assigning vector_ids to the interrupt handle, it assigns vector_id 1 to queue 3 instead of vector_id 4, because the condition used makes vector_id wrap around too early. Because of that early wrap, iavf_config_irq_map() never writes data into the last element of vecmap[], so wrong data is sent to the PF with opcode VIRTCHNL_OP_CONFIG_IRQ_MAP and the error VIRTCHNL_STATUS_ERR_PARAM (-5) is returned.

If the kernel driver supports large VFs (up to 256 queues), different queues may legitimately be assigned the same vector_id. To adapt to large VFs while avoiding the early wrap-around, the condition is changed from "vec >= vf->nb_msix" to "vec >= vf->vf_res->max_vectors".
Fixes: d6bde6b5eae9 ("net/avf: enable Rx interrupt")
Cc: stable@dpdk.org

Signed-off-by: YU DAPENG
---
 drivers/net/iavf/iavf_ethdev.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7e3c26a94..d730bb156 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -483,6 +483,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 	struct iavf_qv_map *qv_map;
 	uint16_t interval, i;
 	int vec;
+	uint16_t max_vectors;
 
 	if (rte_intr_cap_multiple(intr_handle) &&
 	    dev->data->dev_conf.intr_conf.rxq) {
@@ -570,15 +571,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 			/* If Rx interrupt is reuquired, and we can use
 			 * multi interrupts, then the vec is from 1
 			 */
-			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
-					      intr_handle->nb_efd);
+			max_vectors =
+				vf->vf_res->max_vectors - IAVF_RX_VEC_START;
+			vf->nb_msix = RTE_MIN(max_vectors, intr_handle->nb_efd);
 			vf->msix_base = IAVF_RX_VEC_START;
 			vec = IAVF_RX_VEC_START;
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				qv_map[i].queue_id = i;
 				qv_map[i].vector_id = vec;
 				intr_handle->intr_vec[i] = vec++;
-				if (vec >= vf->nb_msix)
+				if (vec >= vf->vf_res->max_vectors)
 					vec = IAVF_RX_VEC_START;
 			}
 			vf->qv_map = qv_map;
-- 
2.27.0