patches for DPDK stable branches
From: "Zhang, AlvinX" <alvinx.zhang@intel.com>
To: "Zhang, Qi Z" <qi.z.zhang@intel.com>, "Rong, Leyi" <leyi.rong@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH v2] net/ice: fix secondary process Rx offload path
Date: Tue, 16 Nov 2021 02:12:48 +0000	[thread overview]
Message-ID: <DM6PR11MB38984B8264B31A0BAD52EBB29F999@DM6PR11MB3898.namprd11.prod.outlook.com> (raw)
In-Reply-To: <d7bca7f374d84fc29c7a995fea54c3e7@intel.com>

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Tuesday, November 16, 2021 9:22 AM
> To: Zhang, AlvinX <alvinx.zhang@intel.com>; Rong, Leyi <leyi.rong@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: RE: [PATCH v2] net/ice: fix secondary process Rx offload path
> 
> 
> 
> > -----Original Message-----
> > From: Zhang, AlvinX <alvinx.zhang@intel.com>
> > Sent: Monday, November 15, 2021 10:06 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Rong, Leyi
> > <leyi.rong@intel.com>
> > Cc: dev@dpdk.org; Zhang, AlvinX <alvinx.zhang@intel.com>;
> > stable@dpdk.org
> > Subject: [PATCH v2] net/ice: fix secondary process Rx offload path
> >
> > The secondary process depends on the vector offload flag to select the
> > right Rx offload path. This patch adds a variable in shared memory to
> > store the vector offload flag so that it can be read directly by the
> > secondary process.
> >
> > Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
> > ---
> >  drivers/net/ice/ice_ethdev.h |  1 +
> >  drivers/net/ice/ice_rxtx.c   | 19 +++++++++++--------
> >  2 files changed, 12 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
> > index 3a5bb9b..52daae0 100644
> > --- a/drivers/net/ice/ice_ethdev.h
> > +++ b/drivers/net/ice/ice_ethdev.h
> > @@ -538,6 +538,7 @@ struct ice_adapter {
> >  	bool rx_use_avx512;
> >  	bool tx_use_avx2;
> >  	bool tx_use_avx512;
> > +	int rx_vec_path;
> 
> Can we make the type/name more specific? How about defining it as "bool
> rx_vec_offload_support;"?
> 
> Then we can keep most things unchanged in the primary-process branch and
> only add the line below:
> 
> ad->rx_vec_offload_support = (rx_check_ret == ICE_VECTOR_OFFLOAD_PATH);
> 
> In the branch that follows, this lets us avoid duplicating the
> "if (rx_check_ret == ICE_VECTOR_OFFLOAD_PATH)" check.
> 

Ok, I'll update it in V3.

> 
> 
> 
> >  #endif
> >  };
> >
> > diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> > index 2d771ea..981493e 100644
> > --- a/drivers/net/ice/ice_rxtx.c
> > +++ b/drivers/net/ice/ice_rxtx.c
> > @@ -3172,15 +3172,14 @@
> >  #ifdef RTE_ARCH_X86
> >  	struct ice_rx_queue *rxq;
> >  	int i;
> > -	int rx_check_ret = -1;
> >  
> >  	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  		ad->rx_use_avx512 = false;
> >  		ad->rx_use_avx2 = false;
> > -		rx_check_ret = ice_rx_vec_dev_check(dev);
> > +		ad->rx_vec_path = ice_rx_vec_dev_check(dev);
> >  		if (ad->ptp_ena)
> > -			rx_check_ret = -1;
> > -		if (rx_check_ret >= 0 && ad->rx_bulk_alloc_allowed &&
> > +			ad->rx_vec_path = -1;
> > +		if (ad->rx_vec_path >= 0 && ad->rx_bulk_alloc_allowed &&
> >  		    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
> >  			ad->rx_vec_allowed = true;
> >  			for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > @@ -3215,7 +3214,8 @@
> >  		if (dev->data->scattered_rx) {
> >  			if (ad->rx_use_avx512) {
> >  #ifdef CC_AVX512_SUPPORT
> > -				if (rx_check_ret == ICE_VECTOR_OFFLOAD_PATH) {
> > +				if (ad->rx_vec_path ==
> > +				    ICE_VECTOR_OFFLOAD_PATH) {
> >  					PMD_DRV_LOG(NOTICE,
> >  						"Using AVX512 OFFLOAD Vector Scattered Rx (port %d).",
> >  						dev->data->port_id);
> > @@ -3230,7 +3230,8 @@
> >  				}
> >  #endif
> >  			} else if (ad->rx_use_avx2) {
> > -				if (rx_check_ret == ICE_VECTOR_OFFLOAD_PATH) {
> > +				if (ad->rx_vec_path ==
> > +				    ICE_VECTOR_OFFLOAD_PATH) {
> >  					PMD_DRV_LOG(NOTICE,
> >  						"Using AVX2 OFFLOAD Vector Scattered Rx (port %d).",
> >  						dev->data->port_id);
> > @@ -3252,7 +3253,8 @@
> >  		} else {
> >  			if (ad->rx_use_avx512) {
> >  #ifdef CC_AVX512_SUPPORT
> > -				if (rx_check_ret == ICE_VECTOR_OFFLOAD_PATH) {
> > +				if (ad->rx_vec_path ==
> > +				    ICE_VECTOR_OFFLOAD_PATH) {
> >  					PMD_DRV_LOG(NOTICE,
> >  						"Using AVX512 OFFLOAD Vector Rx (port %d).",
> >  						dev->data->port_id);
> > @@ -3267,7 +3269,8 @@
> >  				}
> >  #endif
> >  			} else if (ad->rx_use_avx2) {
> > -				if (rx_check_ret == ICE_VECTOR_OFFLOAD_PATH) {
> > +				if (ad->rx_vec_path ==
> > +				    ICE_VECTOR_OFFLOAD_PATH) {
> >  					PMD_DRV_LOG(NOTICE,
> >  						"Using AVX2 OFFLOAD Vector Rx (port %d).",
> >  						dev->data->port_id);
> > --
> > 1.8.3.1
> 


Thread overview: 7+ messages
2021-11-12  5:43 [PATCH] " Alvin Zhang
2021-11-15  2:05 ` [PATCH v2] " Alvin Zhang
2021-11-16  1:21   ` Zhang, Qi Z
2021-11-16  2:12     ` Zhang, AlvinX [this message]
2021-11-16  2:32   ` [PATCH v3] " Alvin Zhang
2021-11-16  3:09     ` Sun, QinX
2021-11-16  4:54       ` Zhang, Qi Z
