DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Wenzhuo Lu <wenzhuo.lu@intel.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/iavf: fix performance drop
Date: Wed, 28 Apr 2021 12:32:04 +0100
Message-ID: <341c212e-e6d0-b0d5-da29-51862fe35348@intel.com>
In-Reply-To: <1619414983-131070-1-git-send-email-wenzhuo.lu@intel.com>

On 4/26/2021 6:29 AM, Wenzhuo Lu wrote:
> AVX2 and SSE don't have the offload path, so there is
> no need to do that check; otherwise the scalar path
> will be chosen.

Hi Wenzhuo,

So the fix is not about changing the Rx implementations, but about making sure
the correct Rx path is selected, right? Can you please clarify this in the commit log?
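
Just to make sure we mean the same thing, below is a simplified sketch of what
I read as "Rx path selection". The function names are from my reading of
drivers/net/iavf/iavf_rxtx.c, and the real iavf_set_rx_function() has more
branches (scattered Rx, flex descriptors, offload flavours), so please treat it
as illustration only:

/* Illustration only: the Rx burst implementations themselves are not
 * touched, the driver just decides which one to plug into
 * dev->rx_pkt_burst based on the use_* flags it computed. */
static void
sketch_set_rx_function(struct rte_eth_dev *dev,
		       bool use_sse, bool use_avx2, bool use_avx512)
{
	if (use_avx512)
		dev->rx_pkt_burst = iavf_recv_pkts_vec_avx512; /* AVX512 vector Rx */
	else if (use_avx2)
		dev->rx_pkt_burst = iavf_recv_pkts_vec_avx2;   /* AVX2 vector Rx */
	else if (use_sse)
		dev->rx_pkt_burst = iavf_recv_pkts_vec;        /* SSE vector Rx */
	else
		dev->rx_pkt_burst = iavf_recv_pkts;            /* scalar Rx */
}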

So the performance drop is fixed for whoever has the vector path supported and
offloads enabled; it would be good to highlight that in the patch title,
otherwise it is too generic.

> 
> Fixes: eff56a7b9f97 ("net/iavf: add offload path for Rx AVX512")
> 

It is not clear to me what caused the performance drop.
Before the above patch, the vector path did not support the offloads and the
scalar path should have been selected. After the above patch, the scalar path
is still selected, although the vector path now supports offloads. Since the
scalar path is selected both before and after, why/when does the performance
drop happen?
Can you please clarify in the commit log how to reproduce the performance drop?
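
For instance, I assume "offloads enabled" means requesting Rx offloads at
configure time, roughly as below; whether checksum offloads are the exact set
that exposed the drop is my guess, please correct me if not:

#include <rte_ethdev.h>

/* Minimal sketch, assuming Rx checksum offloads are enough to make
 * iavf_rx_vec_dev_check() see offloads requested on the port. */
static int
configure_port_with_rx_offloads(uint16_t port_id)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			/* IPv4/UDP/TCP checksum offloads on Rx */
			.offloads = DEV_RX_OFFLOAD_CHECKSUM,
		},
	};

	/* one Rx and one Tx queue is enough to see which Rx path is picked */
	return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}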


> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  drivers/net/iavf/iavf_rxtx.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
> index 3f3cf63..0ba19dbf 100644
> --- a/drivers/net/iavf/iavf_rxtx.c
> +++ b/drivers/net/iavf/iavf_rxtx.c
> @@ -2401,13 +2401,11 @@
>  	check_ret = iavf_rx_vec_dev_check(dev);
>  	if (check_ret >= 0 &&
>  	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
> -		if (check_ret == IAVF_VECTOR_PATH) {
> -			use_sse = true;
> -			if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
> -			     rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
> -			    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
> -				use_avx2 = true;
> -		}
> +		use_sse = true;
> +		if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
> +		     rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
> +		    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
> +			use_avx2 = true;
>  
>  #ifdef CC_AVX512_SUPPORT
>  		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
> 
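
For what it is worth, one generic way to confirm which Rx path a port ended up
with is the burst mode query below, assuming the PMD implements the
rx_burst_mode_get callback (otherwise the call returns -ENOTSUP):

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: print the Rx burst mode reported by the driver for queue 0. */
static void
print_rx_burst_mode(uint16_t port_id)
{
	struct rte_eth_burst_mode mode;

	if (rte_eth_rx_burst_mode_get(port_id, 0, &mode) == 0)
		printf("port %u Rx burst mode: %s\n", port_id, mode.info);
}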

