From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ke Zhang
To: xiaoyun.li@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com,
	dev@dpdk.org
Cc: Ke Zhang
Subject: [PATCH v3] net/iavf: fix mbuf release function pointer corruption
	in multi-process
Date: Sat, 7 May 2022 06:24:39 +0000
Message-Id: <20220507062439.38657-1-ke1x.zhang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220414092902.176462-1-ke1x.zhang@intel.com>
References: <20220414092902.176462-1-ke1x.zhang@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

In a multi-process environment, a secondary process operating on the
shared memory can overwrite the function pointers that the primary
process stored in the shared queue structures. When the primary
process later releases its queues, it can no longer resolve the
function address and crashes.

Fix this by keeping the mbuf release ops in per-process arrays indexed
by queue id, instead of in the shared queue structures, so that each
process resolves the release function in its own address space.

Signed-off-by: Ke Zhang
---
 drivers/net/iavf/iavf_rxtx.c            | 50 +++++++++++++------------
 drivers/net/iavf/iavf_rxtx.h            |  6 +--
 drivers/net/iavf/iavf_rxtx_vec_avx512.c |  4 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c    |  8 ++--
 4 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 1cef985fcc..56d4dbf2a4 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -362,6 +362,9 @@ release_txq_mbufs(struct iavf_tx_queue *txq)
 	}
 }
 
+const struct iavf_rxq_ops *iavf_rxq_release_mbufs_ops[RTE_MAX_QUEUES_PER_PORT];
+const struct iavf_txq_ops *iavf_txq_release_mbufs_ops[RTE_MAX_QUEUES_PER_PORT];
+
 static const struct iavf_rxq_ops def_rxq_ops = {
 	.release_mbufs = release_rxq_mbufs,
 };
@@ -674,7 +677,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->q_set = true;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
-	rxq->ops = &def_rxq_ops;
+	iavf_rxq_release_mbufs_ops[queue_idx] = &def_rxq_ops;
 
 	if (check_rx_bulk_allow(rxq) == true) {
 		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
@@ -811,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
-	txq->ops = &def_txq_ops;
+	iavf_txq_release_mbufs_ops[queue_idx] = &def_txq_ops;
 
 	if (check_tx_vec_allow(txq) == false) {
 		struct iavf_adapter *ad =
@@ -943,7 +946,7 @@ iavf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	rxq->ops->release_mbufs(rxq);
+	iavf_rxq_release_mbufs_ops[rx_queue_id]->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -971,7 +974,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	txq->ops->release_mbufs(txq);
+	iavf_txq_release_mbufs_ops[tx_queue_id]->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -986,7 +989,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 	if (!q)
 		return;
 
-	q->ops->release_mbufs(q);
+	iavf_rxq_release_mbufs_ops[qid]->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -1000,7 +1003,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 	if (!q)
 		return;
 
-	q->ops->release_mbufs(q);
+	iavf_txq_release_mbufs_ops[qid]->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -1034,7 +1037,7 @@ iavf_stop_queues(struct rte_eth_dev *dev)
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		txq->ops->release_mbufs(txq);
+		iavf_txq_release_mbufs_ops[i]->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1042,7 +1045,7 @@ iavf_stop_queues(struct rte_eth_dev *dev)
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		rxq->ops->release_mbufs(rxq);
+		iavf_rxq_release_mbufs_ops[i]->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -2822,12 +2825,12 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC)
 		use_flex = true;
 
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
-			rxq = dev->data->rx_queues[i];
-			(void)iavf_rxq_vec_setup(rxq);
-		}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		(void)iavf_rxq_vec_setup(rxq, &iavf_rxq_release_mbufs_ops[i]);
 	}
+
 	if (dev->data->scattered_rx) {
 		if (!use_avx512) {
 			PMD_DRV_LOG(DEBUG,
@@ -3002,21 +3005,20 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 	}
 #endif
 
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
-			txq = dev->data->tx_queues[i];
-			if (!txq)
-				continue;
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
 #ifdef CC_AVX512_SUPPORT
-			if (use_avx512)
-				iavf_txq_vec_setup_avx512(txq);
-			else
-				iavf_txq_vec_setup(txq);
+		if (use_avx512)
+			iavf_txq_vec_setup_avx512(&iavf_txq_release_mbufs_ops[i]);
+		else
+			iavf_txq_vec_setup(&iavf_txq_release_mbufs_ops[i]);
 #else
-			iavf_txq_vec_setup(txq);
+		iavf_txq_vec_setup(&iavf_txq_release_mbufs_ops[i]);
 #endif
-		}
 	}
+
 	return;
 }
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index bf8aebbce8..7df501d784 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -657,8 +657,8 @@ uint16_t iavf_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
 int iavf_rx_vec_dev_check(struct rte_eth_dev *dev);
 int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
-int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
-int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq, const struct iavf_rxq_ops **rxq_ops);
+int iavf_txq_vec_setup(const struct iavf_txq_ops **txq_ops);
 uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts);
 uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue,
@@ -687,7 +687,7 @@ uint16_t iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t iavf_xmit_pkts_vec_avx512_offload(void *tx_queue,
 					   struct rte_mbuf **tx_pkts,
 					   uint16_t nb_pkts);
-int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+int iavf_txq_vec_setup_avx512(const struct iavf_txq_ops **txq_ops);
 
 uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 7319d4cb65..08de34c87c 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2017,9 +2017,9 @@ static const struct iavf_txq_ops avx512_vec_txq_ops = {
 };
 
 int __rte_cold
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup_avx512(const struct iavf_txq_ops **txq_ops)
 {
-	txq->ops = &avx512_vec_txq_ops;
+	*txq_ops = &avx512_vec_txq_ops;
 	return 0;
 }
 
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 717a227b2c..a782bed2e0 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1219,16 +1219,16 @@ static const struct iavf_txq_ops sse_vec_txq_ops = {
 };
 
 int __rte_cold
-iavf_txq_vec_setup(struct iavf_tx_queue *txq)
+iavf_txq_vec_setup(const struct iavf_txq_ops **txq_ops)
 {
-	txq->ops = &sse_vec_txq_ops;
+	*txq_ops = &sse_vec_txq_ops;
 	return 0;
 }
 
 int __rte_cold
-iavf_rxq_vec_setup(struct iavf_rx_queue *rxq)
+iavf_rxq_vec_setup(struct iavf_rx_queue *rxq, const struct iavf_rxq_ops **rxq_ops)
 {
-	rxq->ops = &sse_vec_rxq_ops;
+	*rxq_ops = &sse_vec_rxq_ops;
 	return iavf_rxq_vec_setup_default(rxq);
 }
-- 
2.25.1