From: Jiawen Wu <jiawenwu@trustnetic.com>
To: dev@dpdk.org
Cc: Jiawen Wu <jiawenwu@trustnetic.com>, stable@dpdk.org
Subject: [PATCH 14/19] net/txgbe: fix memory leak
Date: Tue, 18 Jun 2024 15:11:45 +0800
Message-Id: <20240618071150.21564-15-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.21.0.windows.1
In-Reply-To: <20240618071150.21564-1-jiawenwu@trustnetic.com>
References: <20240618071150.21564-1-jiawenwu@trustnetic.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fix some memory leaks caused by not releasing resources in time.

Fixes: e1698e383c2a ("net/txgbe: add device init and uninit")
Fixes: 635c21354f9a ("net/txgbe: add flow director filter init and uninit")
Fixes: c13f84a71b2d ("net/txgbe: add L2 tunnel filter init and uninit")
Fixes: 3a123ba60a71 ("net/txgbe: support VF start and stop")
Fixes: 039b769f7c01 ("net/txgbe: support VF MAC address")
Fixes: 226bf98eda87 ("net/txgbe: add Rx and Tx queues setup and release")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c    | 4 ++++
 drivers/net/txgbe/txgbe_ethdev_vf.c | 7 ++++++-
 drivers/net/txgbe/txgbe_rxtx.c      | 8 ++++++++
 drivers/net/txgbe/txgbe_rxtx.h      | 2 ++
 4 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index c2df5a314b..26cf7632c3 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -735,6 +735,8 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		PMD_INIT_LOG(ERR,
 			     "Failed to allocate %d bytes needed to store MAC addresses",
 			     RTE_ETHER_ADDR_LEN * TXGBE_VMDQ_NUM_UC_MAC);
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
 		return -ENOMEM;
 	}
 
@@ -902,6 +904,7 @@ static int txgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 	if (!fdir_info->hash_map) {
 		PMD_INIT_LOG(ERR,
			     "Failed to allocate memory for fdir hash map!");
+		rte_hash_free(fdir_info->hash_handle);
 		return -ENOMEM;
 	}
 	fdir_info->mask_added = FALSE;
@@ -937,6 +940,7 @@ static int txgbe_l2_tn_filter_init(struct rte_eth_dev *eth_dev)
 	if (!l2_tn_info->hash_map) {
 		PMD_INIT_LOG(ERR,
			"Failed to allocate memory for L2 TN hash map!");
+		rte_hash_free(l2_tn_info->hash_handle);
 		return -ENOMEM;
 	}
 	l2_tn_info->e_tag_en = FALSE;
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 6ac34058ab..87f76673d7 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -295,6 +295,8 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	err = hw->mac.start_hw(hw);
 	if (err) {
 		PMD_INIT_LOG(ERR, "VF Initialization Failure: %d", err);
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
 		return -EIO;
 	}
 
@@ -671,8 +673,10 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
		 * now only one vector is used for Rx queue
		 */
 		intr_vector = 1;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector)) {
+			txgbe_dev_clear_queues(dev);
 			return -1;
+		}
 	}
 
 	if (rte_intr_dp_is_en(intr_handle)) {
@@ -680,6 +684,7 @@
					   dev->data->nb_rx_queues)) {
 			PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
				     " intr_vec", dev->data->nb_rx_queues);
+			txgbe_dev_clear_queues(dev);
 			return -ENOMEM;
 		}
 	}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 35f80d73ac..a10cbb447d 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2157,6 +2157,7 @@ txgbe_tx_queue_release(struct txgbe_tx_queue *txq)
 	if (txq != NULL && txq->ops != NULL) {
 		txq->ops->release_mbufs(txq);
 		txq->ops->free_swring(txq);
+		rte_memzone_free(txq->mz);
 		rte_free(txq);
 	}
 }
@@ -2376,6 +2377,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	txq->mz = tz;
 	txq->nb_tx_desc = nb_desc;
 	txq->tx_free_thresh = tx_free_thresh;
 	txq->pthresh = tx_conf->tx_thresh.pthresh;
@@ -2499,6 +2501,7 @@ txgbe_rx_queue_release(struct txgbe_rx_queue *rxq)
 		txgbe_rx_queue_release_mbufs(rxq);
 		rte_free(rxq->sw_ring);
 		rte_free(rxq->sw_sc_ring);
+		rte_memzone_free(rxq->mz);
 		rte_free(rxq);
 	}
 }
@@ -2592,6 +2595,10 @@ txgbe_reset_rx_queue(struct txgbe_adapter *adapter, struct txgbe_rx_queue *rxq)
 	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
+
+	if (rxq->pkt_first_seg != NULL)
+		rte_pktmbuf_free(rxq->pkt_first_seg);
+
 	rxq->pkt_first_seg = NULL;
 	rxq->pkt_last_seg = NULL;
 
@@ -2677,6 +2684,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	rxq->mz = rz;
 	/*
 	 * Zero init all the descriptors in the ring.
 	 */
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 336f060633..9155eb1f70 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -322,6 +322,7 @@ struct txgbe_rx_queue {
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
 	struct rte_mbuf *rx_stage[RTE_PMD_TXGBE_RX_MAX_BURST * 2];
+	const struct rte_memzone *mz;
 };
 
 /**
@@ -410,6 +411,7 @@ struct txgbe_tx_queue {
 	uint8_t		    using_ipsec;
 	/**< indicates that IPsec TX feature is in use */
#endif
+	const struct rte_memzone *mz;
 };
 
 struct txgbe_txq_ops {
-- 
2.27.0
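
For readers less familiar with the queue allocation code, the pattern behind most of
these fixes can be sketched roughly as follows: record the descriptor-ring memzone
pointer in the queue structure at setup time so the release path can free it, and
unwind any partial allocations on the error paths instead of returning early. The
my_queue structure and my_queue_setup()/my_queue_release() names below are made-up
illustration names, not driver code; only the rte_zmalloc()/rte_free() and
rte_memzone_reserve()/rte_memzone_free() calls are the real DPDK APIs, and the sketch
assumes a running EAL.

#include <rte_malloc.h>
#include <rte_memzone.h>

struct my_queue {
	void *sw_ring;                  /* software ring, from rte_zmalloc() */
	const struct rte_memzone *mz;   /* descriptor ring memzone, kept so release can free it */
};

static struct my_queue *
my_queue_setup(const char *name, size_t ring_size, int socket_id)
{
	struct my_queue *q = rte_zmalloc("my_queue", sizeof(*q), 0);

	if (q == NULL)
		return NULL;

	q->mz = rte_memzone_reserve(name, ring_size, socket_id, 0);
	if (q->mz == NULL) {
		/* error path: release what was already allocated */
		rte_free(q);
		return NULL;
	}

	q->sw_ring = rte_zmalloc("my_queue_sw_ring", ring_size, 0);
	if (q->sw_ring == NULL) {
		rte_memzone_free(q->mz);
		rte_free(q);
		return NULL;
	}

	return q;
}

static void
my_queue_release(struct my_queue *q)
{
	if (q == NULL)
		return;
	rte_free(q->sw_ring);
	rte_memzone_free(q->mz);   /* without the stored mz pointer this zone would leak */
	rte_free(q);
}

Storing the memzone pointer at setup time is what makes the later rte_memzone_free()
possible; the txgbe queue structures previously did not keep that pointer, which is why
the patch adds the mz member to struct txgbe_rx_queue and struct txgbe_tx_queue.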