From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: "Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [RFC PATCH 06/14] net/ice: use the new common vector capability function
Date: Wed, 6 Aug 2025 14:46:01 +0000
Message-ID: <DM3PPF7D18F34A1A2E79F6AE9ED95D1AF1D8E2DA@DM3PPF7D18F34A1.namprd11.prod.outlook.com>
In-Reply-To: <aIONKnXhTZ9UK6Cu@bricha3-mobl1.ger.corp.intel.com>
>
> On Fri, Jul 25, 2025 at 12:49:11PM +0000, Ciara Loftus wrote:
> > Use the new function for determining the maximum simd bitwidth in
> > the ice driver.
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
>
> Few comments inline below.
> > ---
> > drivers/net/intel/ice/ice_ethdev.h | 5 +--
> > drivers/net/intel/ice/ice_rxtx.c | 52 ++++++------------------
> > drivers/net/intel/ice/ice_rxtx.h | 1 +
> > drivers/net/intel/ice/ice_rxtx_vec_sse.c | 6 +++
> > 4 files changed, 21 insertions(+), 43 deletions(-)
> >
> > diff --git a/drivers/net/intel/ice/ice_ethdev.h b/drivers/net/intel/ice/ice_ethdev.h
> > index 5fda814f06..992fcc9175 100644
> > --- a/drivers/net/intel/ice/ice_ethdev.h
> > +++ b/drivers/net/intel/ice/ice_ethdev.h
> > @@ -11,6 +11,7 @@
> >
> > #include <ethdev_driver.h>
> > #include <rte_tm_driver.h>
> > +#include <rte_vect.h>
> >
> > #include "base/ice_common.h"
> > #include "base/ice_adminq_cmd.h"
> > @@ -674,9 +675,7 @@ struct ice_adapter {
> > /* Set bit if the engine is disabled */
> > unsigned long disabled_engine_mask;
> > struct ice_parser *psr;
> > - /* used only on X86, zero on other Archs */
> > - bool tx_use_avx2;
> > - bool tx_use_avx512;
> > + enum rte_vect_max_simd tx_simd_width;
> > bool rx_vec_offload_support;
> > };
> >
> > diff --git a/drivers/net/intel/ice/ice_rxtx.c b/drivers/net/intel/ice/ice_rxtx.c
> > index 85832d95a3..79217249b9 100644
> > --- a/drivers/net/intel/ice/ice_rxtx.c
> > +++ b/drivers/net/intel/ice/ice_rxtx.c
> > @@ -3703,7 +3703,7 @@ ice_set_rx_function(struct rte_eth_dev *dev)
> > struct ci_rx_queue *rxq;
> > int i;
> > int rx_check_ret = -1;
> > - bool rx_use_avx512 = false, rx_use_avx2 = false;
> > + enum rte_vect_max_simd rx_simd_width = RTE_VECT_SIMD_DISABLED;
> >
> > rx_check_ret = ice_rx_vec_dev_check(dev);
> > if (ad->ptp_ena)
> > @@ -3720,35 +3720,22 @@ ice_set_rx_function(struct rte_eth_dev *dev)
> > break;
> > }
> > }
> > + rx_simd_width = ice_get_max_simd_bitwidth();
> >
>
> Since this whole block is in #ifdef X86_64, do we need a generic ice
> function here? Is it worth just calling the x86 function directly?
We'd then need to include the rx_vec_x86.h file in the common rxtx.c code, which I think is probably not desired.
We could move the function to rx.h, but then that file would have some arch-specific stuff, which is probably not ideal either.
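For illustration, roughly what the common helper factors out of each driver - this is a sketch reconstructed from the checks this patch deletes below, not necessarily the exact body of the common x86 code (e.g. the real version logs a notice where I've only left a comment):

#include <rte_cpuflags.h>
#include <rte_vect.h>

/*
 * Sketch: pick the widest SIMD path that the CPU flags, the build
 * environment and the EAL --force-max-simd-bitwidth limit all allow.
 */
static inline enum rte_vect_max_simd
ci_get_x86_max_simd_bitwidth(void)
{
	if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
			rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
			rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1) {
#ifdef CC_AVX512_SUPPORT
		return RTE_VECT_SIMD_512;
#endif
		/* CPU is AVX-512 capable but the build env is not;
		 * fall through to the AVX2 check. */
	}
	if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256 &&
			(rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
			 rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1))
		return RTE_VECT_SIMD_256;
	return RTE_VECT_SIMD_128;	/* SSE baseline on x86-64 */
}

The per-driver ice_get_max_simd_bitwidth() added at the end of this patch then just returns this value on x86.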
>
> > - if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
> > - rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
> > - rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
> > -#ifdef CC_AVX512_SUPPORT
> > - rx_use_avx512 = true;
> > -#else
> > - PMD_DRV_LOG(NOTICE,
> > - "AVX512 is not supported in build env");
> > -#endif
> > - if (!rx_use_avx512 &&
> > - (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
> > - rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
> > - rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
> > - rx_use_avx2 = true;
> > } else {
> > ad->rx_vec_allowed = false;
> > }
> >
> > if (ad->rx_vec_allowed) {
> > if (dev->data->scattered_rx) {
> > - if (rx_use_avx512) {
> > + if (rx_simd_width == RTE_VECT_SIMD_512) {
> > #ifdef CC_AVX512_SUPPORT
> > if (ad->rx_vec_offload_support)
> > ad->rx_func_type = ICE_RX_AVX512_SCATTERED_OFFLOAD;
> > else
> > ad->rx_func_type = ICE_RX_AVX512_SCATTERED;
> > #endif
> > - } else if (rx_use_avx2) {
> > + } else if (rx_simd_width == RTE_VECT_SIMD_256) {
> > if (ad->rx_vec_offload_support)
> > ad->rx_func_type = ICE_RX_AVX2_SCATTERED_OFFLOAD;
> > else
> > @@ -3757,14 +3744,14 @@ ice_set_rx_function(struct rte_eth_dev *dev)
> > ad->rx_func_type = ICE_RX_SSE_SCATTERED;
> > }
> > } else {
> > - if (rx_use_avx512) {
> > + if (rx_simd_width == RTE_VECT_SIMD_512) {
> > #ifdef CC_AVX512_SUPPORT
> > if (ad->rx_vec_offload_support)
> > ad->rx_func_type = ICE_RX_AVX512_OFFLOAD;
> > else
> > ad->rx_func_type = ICE_RX_AVX512;
> > #endif
> > - } else if (rx_use_avx2) {
> > + } else if (rx_simd_width == RTE_VECT_SIMD_256) {
> > if (ad->rx_vec_offload_support)
> > ad->rx_func_type = ICE_RX_AVX2_OFFLOAD;
> > else
> > @@ -4032,29 +4019,14 @@ ice_set_tx_function(struct rte_eth_dev *dev)
> > int tx_check_ret = -1;
> >
> > if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > - ad->tx_use_avx2 = false;
> > - ad->tx_use_avx512 = false;
> > + ad->tx_simd_width = RTE_VECT_SIMD_DISABLED;
> > tx_check_ret = ice_tx_vec_dev_check(dev);
> > + ad->tx_simd_width = ice_get_max_simd_bitwidth();
> > if (tx_check_ret >= 0 &&
> > rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
> > ad->tx_vec_allowed = true;
> >
> > - if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
> > - rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
> > - rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
> > -#ifdef CC_AVX512_SUPPORT
> > - ad->tx_use_avx512 = true;
> > -#else
> > - PMD_DRV_LOG(NOTICE,
> > - "AVX512 is not supported in build env");
> > -#endif
> > - if (!ad->tx_use_avx512 &&
> > - (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
> > - rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
> > - rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
> > - ad->tx_use_avx2 = true;
> > -
> > - if (!ad->tx_use_avx2 && !ad->tx_use_avx512 &&
> > + if (ad->tx_simd_width < RTE_VECT_SIMD_256 &&
> > tx_check_ret == ICE_VECTOR_OFFLOAD_PATH)
> > ad->tx_vec_allowed = false;
> > ad->tx_vec_allowed = false;
> >
> > @@ -4074,7 +4046,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
> >
> > if (ad->tx_vec_allowed) {
> > dev->tx_pkt_prepare = NULL;
> > - if (ad->tx_use_avx512) {
> > + if (ad->tx_simd_width == RTE_VECT_SIMD_512) {
> > #ifdef CC_AVX512_SUPPORT
> > if (tx_check_ret == ICE_VECTOR_OFFLOAD_PATH) {
> > PMD_DRV_LOG(NOTICE,
> > @@ -4100,9 +4072,9 @@ ice_set_tx_function(struct rte_eth_dev *dev)
> > dev->tx_pkt_prepare = ice_prep_pkts;
> > } else {
> > PMD_DRV_LOG(DEBUG, "Using %sVector Tx (port %d).",
> > - ad->tx_use_avx2 ? "avx2 " : "",
> > + ad->tx_simd_width == RTE_VECT_SIMD_256 ? "avx2 " : "",
> > dev->data->port_id);
> > - dev->tx_pkt_burst = ad->tx_use_avx2 ?
> > + dev->tx_pkt_burst = ad->tx_simd_width == RTE_VECT_SIMD_256 ?
> > ice_xmit_pkts_vec_avx2 :
> > ice_xmit_pkts_vec;
> > }
> > diff --git a/drivers/net/intel/ice/ice_rxtx.h b/drivers/net/intel/ice/ice_rxtx.h
> > index 0301d05888..8c3d6c413a 100644
> > --- a/drivers/net/intel/ice/ice_rxtx.h
> > +++ b/drivers/net/intel/ice/ice_rxtx.h
> > @@ -261,6 +261,7 @@ uint16_t ice_xmit_pkts_vec_avx512_offload(void *tx_queue,
> > int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc);
> > int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > int ice_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
> > +enum rte_vect_max_simd ice_get_max_simd_bitwidth(void);
> >
> > #define FDIR_PARSING_ENABLE_PER_QUEUE(ad, on) do { \
> > int i; \
> > diff --git a/drivers/net/intel/ice/ice_rxtx_vec_sse.c b/drivers/net/intel/ice/ice_rxtx_vec_sse.c
> > index d818b3b728..1545bc3b6e 100644
> > --- a/drivers/net/intel/ice/ice_rxtx_vec_sse.c
> > +++ b/drivers/net/intel/ice/ice_rxtx_vec_sse.c
> > @@ -735,3 +735,9 @@ ice_tx_vec_dev_check(struct rte_eth_dev *dev)
> > {
> > return ice_tx_vec_dev_check_default(dev);
> > }
> > +
> > +enum rte_vect_max_simd
> > +ice_get_max_simd_bitwidth(void)
> > +{
> > + return ci_get_x86_max_simd_bitwidth();
> > +}
>
> If we do wrap the x86 bitwidth function in an ice-specific one, we probably
> need to provide one for other architectures. However, as I comment above, I
> don't think we need to wrap this - though perhaps I'm missing something or
> it's needed in later patches...
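For completeness, if the per-driver wrapper were kept, a non-x86 counterpart would presumably reduce to the generic EAL limit with no ISA probing. A hypothetical sketch only, not code from this series:

/* Hypothetical non-x86 fallback: no CPU-flag probing, just report
 * the generic limit configured via the EAL. */
enum rte_vect_max_simd
ice_get_max_simd_bitwidth(void)
{
	return (enum rte_vect_max_simd)rte_vect_get_max_simd_bitwidth();
}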
>
> > --
> > 2.34.1
> >