From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kalesh A P
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, ajit.khaparde@broadcom.com
Subject: [dpdk-dev] [PATCH 4/4] net/bnxt: fix VF resource allocation strategy
Date: Thu, 20 Jan 2022 14:42:28 +0530
Message-Id: <20220120091228.7076-5-kalesh-anakkur.purayil@broadcom.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>
References: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>
List-Id: DPDK patches and discussions

From: Ajit Khaparde

1. VFs need a notification queue to handle async messages.
   But the current logic does not reserve a notification queue for
   the VFs, leading to initialization failure in some cases.
2. With the current logic, the DPDK PF driver reserves only one VNIC
   for each VF, leading to initialization failure when a VF is
   configured with more than one Rx queue.

Added logic to distribute the NQs and VNICs from the pool across
the VFs and the PF.

While reserving resources for the VFs, the strategy so far has been
to keep both the min and max values the same. This can fail when
there are not enough resources to satisfy the request. Hence, fixed
to instruct the FW not to reserve all the minimum resources requested
for the VF. The VF driver can query the FW for the actually allocated
resources during probe.

Fixes: b7778e8a1c00 ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde
Signed-off-by: Kalesh AP
Reviewed-by: Somnath Kotur
---
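Note: a quick standalone sketch of the resource split applied in
bnxt_calculate_pf_resources() and bnxt_fill_vf_func_cfg_req_new()
below. The pool size, VF count, and the vf_share()/pf_share() helpers
are made up for illustration; this is not driver code.

#include <stdint.h>
#include <stdio.h>

/* Each function (the PF plus num_vfs VFs) gets an equal share of the
 * resource pool; the PF additionally absorbs the division remainder.
 */
static uint16_t vf_share(uint16_t pool, uint16_t num_vfs)
{
	return pool / (num_vfs + 1);
}

static uint16_t pf_share(uint16_t pool, uint16_t num_vfs)
{
	return pool / (num_vfs + 1) + pool % (num_vfs + 1);
}

int main(void)
{
	uint16_t max_vnics = 22, num_vfs = 4;

	/* 22 VNICs over 4 VFs + 1 PF: each VF gets 4, the PF 4 + 2 = 6.
	 * The VF request still keeps min == max, but with the
	 * MIN_GUARANTEED flag no longer set the FW is not forced to
	 * reserve the full minimum up front.
	 */
	printf("per-VF: %u, PF: %u\n",
	       (unsigned int)vf_share(max_vnics, num_vfs),
	       (unsigned int)pf_share(max_vnics, num_vfs));
	return 0;
}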
 drivers/net/bnxt/bnxt_hwrm.c | 32 +++++++++++++++++---------------
 drivers/net/bnxt/bnxt_hwrm.h |  2 ++
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 5418fa1..b4aeec5 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -902,15 +902,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs);
 	if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs)
 		bp->max_l2_ctx += bp->max_rx_em_flows;
-	/* TODO: For now, do not support VMDq/RFS on VFs. */
-	if (BNXT_PF(bp)) {
-		if (bp->pf->max_vfs)
-			bp->max_vnics = 1;
-		else
-			bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
-	} else {
-		bp->max_vnics = 1;
-	}
+	bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
 	PMD_DRV_LOG(DEBUG, "Max l2_cntxts is %d vnics is %d\n",
 		    bp->max_l2_ctx, bp->max_vnics);
 	bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx);
@@ -3495,7 +3487,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp,
 			rte_cpu_to_le_16(pf_resc->num_hw_ring_grps);
 	} else if (BNXT_HAS_NQ(bp)) {
 		enables |= HWRM_FUNC_CFG_INPUT_ENABLES_NUM_MSIX;
-		req.num_msix = rte_cpu_to_le_16(bp->max_nq_rings);
+		req.num_msix = rte_cpu_to_le_16(pf_resc->num_nq_rings);
 	}
 
 	req.flags = rte_cpu_to_le_32(bp->pf->func_cfg_flags);
@@ -3508,7 +3500,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp,
 	req.num_tx_rings = rte_cpu_to_le_16(pf_resc->num_tx_rings);
 	req.num_rx_rings = rte_cpu_to_le_16(pf_resc->num_rx_rings);
 	req.num_l2_ctxs = rte_cpu_to_le_16(pf_resc->num_l2_ctxs);
-	req.num_vnics = rte_cpu_to_le_16(bp->max_vnics);
+	req.num_vnics = rte_cpu_to_le_16(pf_resc->num_vnics);
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(enables);
 
@@ -3545,14 +3537,12 @@ bnxt_fill_vf_func_cfg_req_new(struct bnxt *bp,
 	req->min_rx_rings = req->max_rx_rings;
 	req->max_l2_ctxs = rte_cpu_to_le_16(bp->max_l2_ctx / (num_vfs + 1));
 	req->min_l2_ctxs = req->max_l2_ctxs;
-	/* TODO: For now, do not support VMDq/RFS on VFs. */
-	req->max_vnics = rte_cpu_to_le_16(1);
+	req->max_vnics = rte_cpu_to_le_16(bp->max_vnics / (num_vfs + 1));
 	req->min_vnics = req->max_vnics;
 	req->max_hw_ring_grps = rte_cpu_to_le_16(bp->max_ring_grps /
 						 (num_vfs + 1));
 	req->min_hw_ring_grps = req->max_hw_ring_grps;
-	req->flags =
-		rte_cpu_to_le_16(HWRM_FUNC_VF_RESOURCE_CFG_INPUT_FLAGS_MIN_GUARANTEED);
+	req->max_msix = rte_cpu_to_le_16(bp->max_nq_rings / (num_vfs + 1));
 }
 
 static void
@@ -3612,6 +3602,8 @@ static int bnxt_update_max_resources(struct bnxt *bp,
 	bp->max_rx_rings -= rte_le_to_cpu_16(resp->alloc_rx_rings);
 	bp->max_l2_ctx -= rte_le_to_cpu_16(resp->alloc_l2_ctx);
 	bp->max_ring_grps -= rte_le_to_cpu_16(resp->alloc_hw_ring_grps);
+	bp->max_nq_rings -= rte_le_to_cpu_16(resp->alloc_msix);
+	bp->max_vnics -= rte_le_to_cpu_16(resp->alloc_vnics);
 
 	HWRM_UNLOCK();
 
@@ -3685,6 +3677,8 @@ static int bnxt_query_pf_resources(struct bnxt *bp,
 	pf_resc->num_rx_rings = rte_le_to_cpu_16(resp->alloc_rx_rings);
 	pf_resc->num_l2_ctxs = rte_le_to_cpu_16(resp->alloc_l2_ctx);
 	pf_resc->num_hw_ring_grps = rte_le_to_cpu_32(resp->alloc_hw_ring_grps);
+	pf_resc->num_nq_rings = rte_le_to_cpu_32(resp->alloc_msix);
+	pf_resc->num_vnics = rte_le_to_cpu_16(resp->alloc_vnics);
 	bp->pf->evb_mode = resp->evb_mode;
 
 	HWRM_UNLOCK();
@@ -3705,6 +3699,8 @@ bnxt_calculate_pf_resources(struct bnxt *bp,
 		pf_resc->num_rx_rings = bp->max_rx_rings;
 		pf_resc->num_l2_ctxs = bp->max_l2_ctx;
 		pf_resc->num_hw_ring_grps = bp->max_ring_grps;
+		pf_resc->num_nq_rings = bp->max_nq_rings;
+		pf_resc->num_vnics = bp->max_vnics;
 
 		return;
 	}
@@ -3723,6 +3719,10 @@ bnxt_calculate_pf_resources(struct bnxt *bp,
 			       bp->max_l2_ctx % (num_vfs + 1);
 	pf_resc->num_hw_ring_grps = bp->max_ring_grps / (num_vfs + 1) +
 				    bp->max_ring_grps % (num_vfs + 1);
+	pf_resc->num_nq_rings = bp->max_nq_rings / (num_vfs + 1) +
+				bp->max_nq_rings % (num_vfs + 1);
+	pf_resc->num_vnics = bp->max_vnics / (num_vfs + 1) +
+			     bp->max_vnics % (num_vfs + 1);
 }
 
 int bnxt_hwrm_allocate_pf_only(struct bnxt *bp)
@@ -3898,6 +3898,8 @@ bnxt_update_pf_resources(struct bnxt *bp,
 	bp->max_tx_rings = pf_resc->num_tx_rings;
 	bp->max_rx_rings = pf_resc->num_rx_rings;
 	bp->max_ring_grps = pf_resc->num_hw_ring_grps;
+	bp->max_nq_rings = pf_resc->num_nq_rings;
+	bp->max_vnics = pf_resc->num_vnics;
 }
 
 static int32_t
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 21e1b7a..63f8d8c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -114,6 +114,8 @@ struct bnxt_pf_resource_info {
 	uint16_t num_rx_rings;
 	uint16_t num_cp_rings;
 	uint16_t num_l2_ctxs;
+	uint16_t num_nq_rings;
+	uint16_t num_vnics;
 	uint32_t num_hw_ring_grps;
 };
 
-- 
2.10.1