From: Ciara Loftus <ciara.loftus@intel.com>
To: dev@dpdk.org
Cc: Ciara Loftus <ciara.loftus@intel.com>
Subject: [RFC PATCH 04/14] net/i40e: use the same Rx path across process types
Date: Fri, 25 Jul 2025 12:49:09 +0000
Message-Id: <20250725124919.3564890-5-ciara.loftus@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250725124919.3564890-1-ciara.loftus@intel.com>
References: <20250725124919.3564890-1-ciara.loftus@intel.com>

In the interest of simplicity, let the primary process select the Rx
path to be used by all processes using the given device.

The many logs which report individual Rx path selections have been
consolidated into a single log.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
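Notes:
The sketch below only illustrates the process model this patch moves to;
every name in it (shared_adapter, rx_infos, select_rx_path, ...) is an
illustrative stand-in, not one of the driver's real symbols. The primary
process records an enum in device private data visible to all processes,
and each process resolves that enum through its own copy of the
burst-function table, so secondaries never repeat the selection logic and
no function pointer is ever shared across address spaces.

/* Minimal sketch of the shared-enum / per-process-table pattern.
 * All names are hypothetical stand-ins for the driver's types. */
#include <stdbool.h>
#include <stdio.h>

enum rx_func_type { RX_SCALAR, RX_SSE, RX_AVX2, RX_NUM };

typedef int (*rx_burst_t)(void);

static int rx_scalar(void) { return 0; }
static int rx_sse(void)    { return 1; }
static int rx_avx2(void)   { return 2; }

/* Per-process table: function pointers are only valid inside the
 * process that built them, so each process keeps its own copy. */
static const struct { rx_burst_t burst; const char *info; } rx_infos[RX_NUM] = {
	[RX_SCALAR] = { rx_scalar, "Scalar" },
	[RX_SSE]    = { rx_sse,    "Vector SSE" },
	[RX_AVX2]   = { rx_avx2,   "Vector AVX2" },
};

/* Lives in memory visible to both primary and secondary processes
 * (the adapter private data in the driver). Only the enum is shared. */
struct shared_adapter { enum rx_func_type rx_func_type; };

static rx_burst_t
select_rx_path(struct shared_adapter *ad, bool is_primary, bool avx2_ok)
{
	/* Only the primary runs the selection logic and records the result. */
	if (is_primary)
		ad->rx_func_type = avx2_ok ? RX_AVX2 : RX_SSE;

	/* Every process translates the shared enum into a local function
	 * pointer, so primary and secondaries use the same Rx path. */
	printf("Using %s Rx burst function\n", rx_infos[ad->rx_func_type].info);
	return rx_infos[ad->rx_func_type].burst;
}

int main(void)
{
	/* One process stands in for both roles here, purely for illustration. */
	struct shared_adapter ad = { RX_SCALAR };
	rx_burst_t p = select_rx_path(&ad, true, true);   /* primary selects */
	rx_burst_t s = select_rx_path(&ad, false, false); /* secondary follows */
	return p == s ? 0 : 1; /* both resolved to the same path */
}

Sharing only the enum is what makes this safe: a function pointer taken in
the primary is meaningless in a secondary's address space, while the
per-process table turns the shared index back into a valid local pointer.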
 drivers/net/intel/i40e/i40e_ethdev.h |  20 +++-
 drivers/net/intel/i40e/i40e_rxtx.c   | 168 ++++++++++++---------------
 2 files changed, 93 insertions(+), 95 deletions(-)

diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 44864292d0..308039c363 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1226,6 +1226,22 @@ struct i40e_vsi_vlan_pvid_info {
 #define I40E_MBUF_CHECK_F_TX_SEGMENT (1ULL << 2)
 #define I40E_MBUF_CHECK_F_TX_OFFLOAD (1ULL << 3)
 
+enum i40e_rx_func_type {
+	I40E_RX_DEFAULT,
+	I40E_RX_BULK_ALLOC,
+	I40E_RX_SCATTERED,
+	I40E_RX_SSE,
+	I40E_RX_AVX2,
+	I40E_RX_SSE_SCATTERED,
+	I40E_RX_AVX2_SCATTERED,
+	I40E_RX_AVX512,
+	I40E_RX_AVX512_SCATTERED,
+	I40E_RX_NEON,
+	I40E_RX_NEON_SCATTERED,
+	I40E_RX_ALTIVEC,
+	I40E_RX_ALTIVEC_SCATTERED,
+};
+
 /*
  * Structure to store private data for each PF/VF instance.
  */
@@ -1242,6 +1258,8 @@ struct i40e_adapter {
 	bool tx_simple_allowed;
 	bool tx_vec_allowed;
 
+	enum i40e_rx_func_type rx_func_type;
+
 	uint64_t mbuf_check; /* mbuf check flags. */
 	uint16_t max_pkt_len; /* Maximum packet length */
 	eth_tx_burst_t tx_pkt_burst;
@@ -1262,8 +1280,6 @@ struct i40e_adapter {
 	uint8_t rss_reta_updated;
 
 	/* used only on x86, zero on other architectures */
-	bool rx_use_avx2;
-	bool rx_use_avx512;
 	bool tx_use_avx2;
 	bool tx_use_avx512;
 };
diff --git a/drivers/net/intel/i40e/i40e_rxtx.c b/drivers/net/intel/i40e/i40e_rxtx.c
index aba3c11ee5..bcf5af50e6 100644
--- a/drivers/net/intel/i40e/i40e_rxtx.c
+++ b/drivers/net/intel/i40e/i40e_rxtx.c
@@ -3310,6 +3310,31 @@ get_avx_supported(bool request_avx512)
 }
 #endif /* RTE_ARCH_X86 */
 
+static const struct {
+	eth_rx_burst_t pkt_burst;
+	const char *info;
+} i40e_rx_burst_infos[] = {
+	[I40E_RX_SCATTERED] = { i40e_recv_scattered_pkts, "Scalar Scattered" },
+	[I40E_RX_BULK_ALLOC] = { i40e_recv_pkts_bulk_alloc, "Scalar Bulk Alloc" },
+	[I40E_RX_DEFAULT] = { i40e_recv_pkts, "Scalar" },
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	[I40E_RX_AVX512_SCATTERED] = {
+		i40e_recv_scattered_pkts_vec_avx512, "Vector AVX512 Scattered" },
+	[I40E_RX_AVX512] = { i40e_recv_pkts_vec_avx512, "Vector AVX512" },
+#endif
+	[I40E_RX_AVX2_SCATTERED] = { i40e_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered" },
+	[I40E_RX_AVX2] = { i40e_recv_pkts_vec_avx2, "Vector AVX2" },
+	[I40E_RX_SSE_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector SSE Scattered" },
+	[I40E_RX_SSE] = { i40e_recv_pkts_vec, "Vector SSE" },
+#elif defined(RTE_ARCH_ARM64)
+	[I40E_RX_NEON_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector Neon Scattered" },
+	[I40E_RX_NEON] = { i40e_recv_pkts_vec, "Vector Neon" },
+#elif defined(RTE_ARCH_PPC_64)
+	[I40E_RX_ALTIVEC_SCATTERED] = { i40e_recv_scattered_pkts_vec, "Vector AltiVec Scattered" },
+	[I40E_RX_ALTIVEC] = { i40e_recv_pkts_vec, "Vector AltiVec" },
+#endif
+};
+
 void __rte_cold
 i40e_set_rx_function(struct rte_eth_dev *dev)
@@ -3317,109 +3342,86 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint16_t vector_rx, i;
+
+	/* The primary process selects the rx path for all processes. */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto out;
+
 	/* In order to allow Vector Rx there are a few configuration
 	 * conditions to be met and Rx Bulk Allocation should be allowed.
 	 */
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 #ifdef RTE_ARCH_X86
-		ad->rx_use_avx512 = false;
-		ad->rx_use_avx2 = false;
+	bool rx_use_avx512 = false, rx_use_avx2 = false;
 #endif
-		if (i40e_rx_vec_dev_conf_condition_check(dev) ||
-		    !ad->rx_bulk_alloc_allowed) {
-			PMD_INIT_LOG(DEBUG, "Port[%d] doesn't meet"
-				     " Vector Rx preconditions",
-				     dev->data->port_id);
+	if (i40e_rx_vec_dev_conf_condition_check(dev) || !ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG, "Port[%d] doesn't meet"
+			     " Vector Rx preconditions",
+			     dev->data->port_id);
 
-			ad->rx_vec_allowed = false;
-		}
-		if (ad->rx_vec_allowed) {
-			for (i = 0; i < dev->data->nb_rx_queues; i++) {
-				struct ci_rx_queue *rxq =
-					dev->data->rx_queues[i];
+		ad->rx_vec_allowed = false;
+	}
+	if (ad->rx_vec_allowed) {
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			struct ci_rx_queue *rxq =
+				dev->data->rx_queues[i];
 
-				if (rxq && i40e_rxq_vec_setup(rxq)) {
-					ad->rx_vec_allowed = false;
-					break;
-				}
+			if (rxq && i40e_rxq_vec_setup(rxq)) {
+				ad->rx_vec_allowed = false;
+				break;
 			}
+		}
 #ifdef RTE_ARCH_X86
-			ad->rx_use_avx512 = get_avx_supported(1);
+		rx_use_avx512 = get_avx_supported(1);
 
-			if (!ad->rx_use_avx512)
-				ad->rx_use_avx2 = get_avx_supported(0);
+		if (!rx_use_avx512)
+			rx_use_avx2 = get_avx_supported(0);
 #endif
-		}
 	}
 
-	if (ad->rx_vec_allowed &&
-	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+	if (ad->rx_vec_allowed && rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
 #ifdef RTE_ARCH_X86
 		if (dev->data->scattered_rx) {
-			if (ad->rx_use_avx512) {
+			if (rx_use_avx512) {
 #ifdef CC_AVX512_SUPPORT
-				PMD_DRV_LOG(NOTICE,
-					"Using AVX512 Vector Scattered Rx (port %d).",
-					dev->data->port_id);
-				dev->rx_pkt_burst =
-					i40e_recv_scattered_pkts_vec_avx512;
+				ad->rx_func_type = I40E_RX_AVX512_SCATTERED;
 #endif
 			} else {
-				PMD_INIT_LOG(DEBUG,
-					"Using %sVector Scattered Rx (port %d).",
-					ad->rx_use_avx2 ? "avx2 " : "",
-					dev->data->port_id);
-				dev->rx_pkt_burst = ad->rx_use_avx2 ?
-					i40e_recv_scattered_pkts_vec_avx2 :
-					i40e_recv_scattered_pkts_vec;
+				ad->rx_func_type = rx_use_avx2 ?
+					I40E_RX_AVX2_SCATTERED :
+					I40E_RX_SCATTERED;
 				dev->recycle_rx_descriptors_refill =
 					i40e_recycle_rx_descriptors_refill_vec;
 			}
 		} else {
-			if (ad->rx_use_avx512) {
+			if (rx_use_avx512) {
 #ifdef CC_AVX512_SUPPORT
-				PMD_DRV_LOG(NOTICE,
-					"Using AVX512 Vector Rx (port %d).",
-					dev->data->port_id);
-				dev->rx_pkt_burst =
-					i40e_recv_pkts_vec_avx512;
+				ad->rx_func_type = I40E_RX_AVX512;
 #endif
 			} else {
-				PMD_INIT_LOG(DEBUG,
-					"Using %sVector Rx (port %d).",
-					ad->rx_use_avx2 ? "avx2 " : "",
-					dev->data->port_id);
-				dev->rx_pkt_burst = ad->rx_use_avx2 ?
-					i40e_recv_pkts_vec_avx2 :
-					i40e_recv_pkts_vec;
+				ad->rx_func_type = rx_use_avx2 ?
+					I40E_RX_AVX2 :
+					I40E_RX_SSE;
 				dev->recycle_rx_descriptors_refill =
 					i40e_recycle_rx_descriptors_refill_vec;
 			}
 		}
-#else /* RTE_ARCH_X86 */
+#elif defined(RTE_ARCH_ARM64)
 		dev->recycle_rx_descriptors_refill = i40e_recycle_rx_descriptors_refill_vec;
-		if (dev->data->scattered_rx) {
-			PMD_INIT_LOG(DEBUG,
-				     "Using Vector Scattered Rx (port %d).",
-				     dev->data->port_id);
-			dev->rx_pkt_burst = i40e_recv_scattered_pkts_vec;
-		} else {
-			PMD_INIT_LOG(DEBUG, "Using Vector Rx (port %d).",
-				     dev->data->port_id);
-			dev->rx_pkt_burst = i40e_recv_pkts_vec;
-		}
+		if (dev->data->scattered_rx)
+			ad->rx_func_type = I40E_RX_NEON_SCATTERED;
+		else
+			ad->rx_func_type = I40E_RX_NEON;
+#elif defined(RTE_ARCH_PPC_64)
+		dev->recycle_rx_descriptors_refill = i40e_recycle_rx_descriptors_refill_vec;
+		if (dev->data->scattered_rx)
+			ad->rx_func_type = I40E_RX_ALTIVEC_SCATTERED;
+		else
+			ad->rx_func_type = I40E_RX_ALTIVEC;
 #endif /* RTE_ARCH_X86 */
 	} else if (!dev->data->scattered_rx && ad->rx_bulk_alloc_allowed) {
-		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
-			     "satisfied. Rx Burst Bulk Alloc function "
-			     "will be used on port=%d.",
-			     dev->data->port_id);
-		dev->rx_pkt_burst = i40e_recv_pkts_bulk_alloc;
 	} else {
 		/* Simple Rx Path. */
-		PMD_INIT_LOG(DEBUG, "Simple Rx path will be used on port=%d.",
-			     dev->data->port_id);
 		dev->rx_pkt_burst = dev->data->scattered_rx ?
 					i40e_recv_scattered_pkts :
 					i40e_recv_pkts;
@@ -3444,32 +3446,12 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 			rxq->vector_rx = vector_rx;
 		}
 	}
-}
 
-static const struct {
-	eth_rx_burst_t pkt_burst;
-	const char *info;
-} i40e_rx_burst_infos[] = {
-	{ i40e_recv_scattered_pkts, "Scalar Scattered" },
-	{ i40e_recv_pkts_bulk_alloc, "Scalar Bulk Alloc" },
-	{ i40e_recv_pkts, "Scalar" },
-#ifdef RTE_ARCH_X86
-#ifdef CC_AVX512_SUPPORT
-	{ i40e_recv_scattered_pkts_vec_avx512, "Vector AVX512 Scattered" },
-	{ i40e_recv_pkts_vec_avx512, "Vector AVX512" },
-#endif
-	{ i40e_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered" },
-	{ i40e_recv_pkts_vec_avx2, "Vector AVX2" },
-	{ i40e_recv_scattered_pkts_vec, "Vector SSE Scattered" },
-	{ i40e_recv_pkts_vec, "Vector SSE" },
-#elif defined(RTE_ARCH_ARM64)
-	{ i40e_recv_scattered_pkts_vec, "Vector Neon Scattered" },
-	{ i40e_recv_pkts_vec, "Vector Neon" },
-#elif defined(RTE_ARCH_PPC_64)
-	{ i40e_recv_scattered_pkts_vec, "Vector AltiVec Scattered" },
-	{ i40e_recv_pkts_vec, "Vector AltiVec" },
-#endif
-};
+out:
+	dev->rx_pkt_burst = i40e_rx_burst_infos[ad->rx_func_type].pkt_burst;
+	PMD_DRV_LOG(NOTICE, "Using %s Rx burst function (port %d).",
+		    i40e_rx_burst_infos[ad->rx_func_type].info, dev->data->port_id);
+}
 
 int
 i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
-- 
2.34.1