From mboxrd@z Thu Jan  1 00:00:00 1970
From: Devendra Singh Rawat <dsinghrawat@marvell.com>
To: stable@dpdk.org
Cc: Devendra Singh Rawat <dsinghrawat@marvell.com>
Subject: [PATCH 20.11] net/qede: fix Rx bulk mbuf allocation
Date: Wed, 16 Mar 2022 17:33:06 +0530
Message-ID: <20220316120306.561108-1-dsinghrawat@marvell.com>
X-Mailer: git-send-email 2.18.2
MIME-Version: 1.0
Content-Type: text/plain
List-Id: patches for DPDK stable branches

[ upstream commit f65c7fbceca91b54200ca3dc5d27f2292e5d829f ]

qede_alloc_rx_bulk_mbufs() was trimming the requested mbuf count to
QEDE_MAX_BULK_ALLOC_COUNT. The Rx callback was unaware of this
trimming and always reset the number of empty Rx BD ring slots to 0.
This left the Rx BD ring in an inconsistent state, and ultimately the
application failed to receive any traffic.

The fix trims the requested mbuf count before calling
qede_alloc_rx_bulk_mbufs(). After qede_alloc_rx_bulk_mbufs() returns
successfully, the number of empty Rx BD ring slots is decremented by
the correct count.
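The core of the fix, as a minimal standalone sketch: the struct, the
helper, and the cap value below are simplified stand-ins for struct
qede_rx_queue, qede_alloc_rx_bulk_mbufs() and
QEDE_MAX_BULK_ALLOC_COUNT, not the driver's actual definitions.

    #include <stdint.h>

    #define BULK_ALLOC_CAP 512 /* stand-in cap; illustrative value only */

    struct rx_queue {                /* simplified stand-in for struct qede_rx_queue */
            uint16_t rx_alloc_count; /* empty Rx BD ring slots awaiting mbufs */
    };

    /* stand-in for qede_alloc_rx_bulk_mbufs(); returns 0 on success */
    static int alloc_rx_bulk(struct rx_queue *rxq, uint16_t count)
    {
            (void)rxq;
            (void)count;
            return 0;
    }

    /*
     * Fixed pattern: trim the request to the cap before the call, and
     * on success subtract only what was actually allocated instead of
     * unconditionally resetting rx_alloc_count to 0.
     */
    static void refill_rx_ring(struct rx_queue *rxq)
    {
            uint16_t count;

            if (rxq->rx_alloc_count == 0)
                    return;

            count = rxq->rx_alloc_count > BULK_ALLOC_CAP ?
                    BULK_ALLOC_CAP : rxq->rx_alloc_count;

            if (alloc_rx_bulk(rxq, count) == 0)
                    rxq->rx_alloc_count -= count; /* remainder refilled next poll */
    }

Because the remainder stays in rx_alloc_count, a backlog larger than
the cap is refilled incrementally over subsequent polls instead of
being forgotten.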
Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")

Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
Signed-off-by: Rasesh Mody
---
 drivers/net/qede/qede_rxtx.c | 49 ++++++++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 122e9290ed..f357a8f258 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -46,17 +46,14 @@ static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
 	int i, ret = 0;
 	uint16_t idx;
 
-	if (count > QEDE_MAX_BULK_ALLOC_COUNT)
-		count = QEDE_MAX_BULK_ALLOC_COUNT;
+	idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
 
 	ret = rte_mempool_get_bulk(rxq->mb_pool, obj_p, count);
 	if (unlikely(ret)) {
 		PMD_RX_LOG(ERR, rxq,
 			   "Failed to allocate %d rx buffers "
 			    "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
-			    count,
-			    rxq->sw_rx_prod & NUM_RX_BDS(rxq),
-			    rxq->sw_rx_cons & NUM_RX_BDS(rxq),
+			    count, idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
 			    rte_mempool_avail_count(rxq->mb_pool),
 			    rte_mempool_in_use_count(rxq->mb_pool));
 		return -ENOMEM;
@@ -1542,25 +1539,26 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint8_t bitfield_val;
 #endif
 	uint8_t offset, flags, bd_num;
-
+	uint16_t count = 0;
 
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
-		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-			     rxq->rx_alloc_count))) {
+		count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+			QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
 			struct rte_eth_dev *dev;
 
 			PMD_RX_LOG(ERR, rxq,
-				   "New buffer allocation failed,"
-				   "dropping incoming packetn");
+				   "New buffers allocation failed,"
+				   "dropping incoming packets\n");
 			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed +=
-							rxq->rx_alloc_count;
-			rxq->rx_alloc_errors += rxq->rx_alloc_count;
+			dev->data->rx_mbuf_alloc_failed += count;
+			rxq->rx_alloc_errors += count;
 			return 0;
 		}
 		qede_update_rx_prod(qdev, rxq);
-		rxq->rx_alloc_count = 0;
+		rxq->rx_alloc_count -= count;
 	}
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -1729,7 +1727,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	/* Request number of buffers to be allocated in next loop */
-	rxq->rx_alloc_count = rx_alloc_count;
+	rxq->rx_alloc_count += rx_alloc_count;
 
 	rxq->rcv_pkts += rx_pkt;
 	rxq->rx_segs += rx_pkt;
@@ -1769,25 +1767,26 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct qede_agg_info *tpa_info = NULL;
 	uint32_t rss_hash;
 	int rx_alloc_count = 0;
-
+	uint16_t count = 0;
 
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
-		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-			     rxq->rx_alloc_count))) {
+		count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+			QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
 			struct rte_eth_dev *dev;
 
 			PMD_RX_LOG(ERR, rxq,
-				   "New buffer allocation failed,"
-				   "dropping incoming packetn");
+				   "New buffers allocation failed,"
+				   "dropping incoming packets\n");
 			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed +=
-							rxq->rx_alloc_count;
-			rxq->rx_alloc_errors += rxq->rx_alloc_count;
+			dev->data->rx_mbuf_alloc_failed += count;
+			rxq->rx_alloc_errors += count;
 			return 0;
 		}
 		qede_update_rx_prod(qdev, rxq);
-		rxq->rx_alloc_count = 0;
+		rxq->rx_alloc_count -= count;
 	}
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -2026,7 +2025,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	/* Request number of buffers to be allocated in next loop */
-	rxq->rx_alloc_count = rx_alloc_count;
+	rxq->rx_alloc_count += rx_alloc_count;
 
 	rxq->rcv_pkts += rx_pkt;
-- 
2.18.2
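A small standalone walkthrough of the new accumulate-and-trim
arithmetic: the 700-slot backlog and the 512 cap below are
illustrative numbers, not taken from the driver.

    #include <stdio.h>
    #include <stdint.h>

    #define BULK_ALLOC_CAP 512 /* stand-in for QEDE_MAX_BULK_ALLOC_COUNT */

    int main(void)
    {
            uint16_t rx_alloc_count = 700; /* empty BD slots after a busy poll */

            while (rx_alloc_count > 0) {
                    uint16_t count = rx_alloc_count > BULK_ALLOC_CAP ?
                                     BULK_ALLOC_CAP : rx_alloc_count;
                    /* assume the bulk mbuf allocation succeeds here */
                    rx_alloc_count -= count;
                    printf("refilled %u mbufs, %u slots still empty\n",
                           count, rx_alloc_count);
            }
            return 0;
    }

This prints "refilled 512, 188 still empty" and then "refilled 188, 0
still empty". With the old code, the first pass would have reset the
counter to 0, so the remaining 188 slots would never have been
refilled; that is exactly the inconsistent BD ring state described in
the commit message.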