From mboxrd@z Thu Jan  1 00:00:00 1970
From: Devendra Singh Rawat <dsinghrawat@marvell.com>
To: <dev@dpdk.org>
Cc: Devendra Singh Rawat <dsinghrawat@marvell.com>, <stable@dpdk.org>
Subject: [PATCH 2/3] net/qede: fix Rx callback
Date: Fri, 4 Mar 2022 17:38:32 +0530
Message-ID: <20220304120833.312776-2-dsinghrawat@marvell.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220304120833.312776-1-dsinghrawat@marvell.com>
References: <20220304120833.312776-1-dsinghrawat@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

qede_alloc_rx_bulk_mbufs() was trimming the number of requested mbufs
to QEDE_MAX_BULK_ALLOC_COUNT, but the Rx callback was unaware of this
trimming and always reset the number of empty Rx BD ring slots to 0.
This left the Rx BD ring in an inconsistent state and ultimately the
application failed to receive any traffic.

Fix this by trimming the number of requested mbufs before calling
qede_alloc_rx_bulk_mbufs() and, after it returns successfully, by
decrementing the number of empty Rx BD ring slots by the count that
was actually allocated.
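The new accounting is easiest to see in isolation. The following
self-contained sketch models it (illustrative only, not driver code:
refill_pass() is a hypothetical stand-in for one refill pass of the
Rx callback, and the QEDE_MAX_BULK_ALLOC_COUNT value is chosen only
for the demo):

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-in value; the real constant lives in qede_rxtx.h. */
    #define QEDE_MAX_BULK_ALLOC_COUNT 64

    /*
     * Models one refill pass of the fixed Rx callback: trim the
     * request to the bulk-allocation limit, then (assuming the bulk
     * allocation succeeds) decrement the outstanding slot count by
     * what was actually refilled instead of resetting it to zero.
     */
    static uint16_t refill_pass(uint16_t rx_alloc_count)
    {
        uint16_t count = rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
                         QEDE_MAX_BULK_ALLOC_COUNT : rx_alloc_count;

        /* qede_alloc_rx_bulk_mbufs(rxq, count) would be called here */
        return rx_alloc_count - count;
    }

    int main(void)
    {
        uint16_t slots = 150; /* empty Rx BD ring slots awaiting refill */

        /* 150 slots drain over three passes: 64 + 64 + 22 */
        while (slots != 0) {
            slots = refill_pass(slots);
            printf("slots left: %u\n", (unsigned)slots);
        }
        return 0;
    }

Before this fix the callback behaved as if refill_pass() always
returned 0, so any slots beyond the trimmed count were never refilled.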
Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_rxtx.c | 68 ++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 0c52568180..02fa1fcaa1 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -38,48 +38,40 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 
 static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
 {
+	void *obj_p[QEDE_MAX_BULK_ALLOC_COUNT] __rte_cache_aligned;
 	struct rte_mbuf *mbuf = NULL;
 	struct eth_rx_bd *rx_bd;
 	dma_addr_t mapping;
 	int i, ret = 0;
 	uint16_t idx;
-	uint16_t mask = NUM_RX_BDS(rxq);
-
-	if (count > QEDE_MAX_BULK_ALLOC_COUNT)
-		count = QEDE_MAX_BULK_ALLOC_COUNT;
 
 	idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
-	if (count > mask - idx + 1)
-		count = mask - idx + 1;
-
-	ret = rte_mempool_get_bulk(rxq->mb_pool, (void **)&rxq->sw_rx_ring[idx],
-				   count);
-
+	ret = rte_mempool_get_bulk(rxq->mb_pool, obj_p, count);
 	if (unlikely(ret)) {
 		PMD_RX_LOG(ERR, rxq,
 			   "Failed to allocate %d rx buffers "
 			    "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
-			    count,
-			    rxq->sw_rx_prod & NUM_RX_BDS(rxq),
-			    rxq->sw_rx_cons & NUM_RX_BDS(rxq),
+			    count, idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
 			    rte_mempool_avail_count(rxq->mb_pool),
 			    rte_mempool_in_use_count(rxq->mb_pool));
 		return -ENOMEM;
 	}
 
 	for (i = 0; i < count; i++) {
-		rte_prefetch0(rxq->sw_rx_ring[(idx + 1) & NUM_RX_BDS(rxq)]);
-		mbuf = rxq->sw_rx_ring[idx & NUM_RX_BDS(rxq)];
+		mbuf = obj_p[i];
+		if (likely(i < count - 1))
+			rte_prefetch0(obj_p[i + 1]);
+
+		idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
+		rxq->sw_rx_ring[idx] = mbuf;
 		mapping = rte_mbuf_data_iova_default(mbuf);
 		rx_bd = (struct eth_rx_bd *)
 			ecore_chain_produce(&rxq->rx_bd_ring);
 		rx_bd->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
 		rx_bd->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
-		idx++;
+		rxq->sw_rx_prod++;
 	}
 
-	rxq->sw_rx_prod = idx;
-
 	return 0;
 }
 
@@ -1544,25 +1536,26 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint8_t bitfield_val;
 #endif
 	uint8_t offset, flags, bd_num;
-
+	uint16_t count = 0;
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
-		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-			     rxq->rx_alloc_count))) {
+		count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+			QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
 			struct rte_eth_dev *dev;
 
 			PMD_RX_LOG(ERR, rxq,
-				   "New buffer allocation failed,"
-				   "dropping incoming packetn");
+				   "New buffers allocation failed,"
+				   "dropping incoming packets\n");
 			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed +=
-							rxq->rx_alloc_count;
-			rxq->rx_alloc_errors += rxq->rx_alloc_count;
+			dev->data->rx_mbuf_alloc_failed += count;
+			rxq->rx_alloc_errors += count;
 
 			return 0;
 		}
 		qede_update_rx_prod(qdev, rxq);
-		rxq->rx_alloc_count = 0;
+		rxq->rx_alloc_count -= count;
 	}
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -1731,7 +1724,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	/* Request number of buffers to be allocated in next loop */
-	rxq->rx_alloc_count = rx_alloc_count;
+	rxq->rx_alloc_count += rx_alloc_count;
 
 	rxq->rcv_pkts += rx_pkt;
 	rxq->rx_segs += rx_pkt;
@@ -1771,25 +1764,26 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct qede_agg_info *tpa_info = NULL;
 	uint32_t rss_hash;
 	int rx_alloc_count = 0;
-
+	uint16_t count = 0;
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
-		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-			     rxq->rx_alloc_count))) {
+		count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+			QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
 			struct rte_eth_dev *dev;
 
 			PMD_RX_LOG(ERR, rxq,
-				   "New buffer allocation failed,"
-				   "dropping incoming packetn");
+				   "New buffers allocation failed,"
+				   "dropping incoming packets\n");
 			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed +=
-							rxq->rx_alloc_count;
-			rxq->rx_alloc_errors += rxq->rx_alloc_count;
+			dev->data->rx_mbuf_alloc_failed += count;
+			rxq->rx_alloc_errors += count;
 
 			return 0;
 		}
 		qede_update_rx_prod(qdev, rxq);
-		rxq->rx_alloc_count = 0;
+		rxq->rx_alloc_count -= count;
 	}
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -2028,7 +2022,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	/* Request number of buffers to be allocated in next loop */
-	rxq->rx_alloc_count = rx_alloc_count;
+	rxq->rx_alloc_count += rx_alloc_count;
 
 	rxq->rcv_pkts += rx_pkt;
 
-- 
2.18.2