DPDK patches and discussions
From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>
Subject: RE: [PATCH] eal/x86: cache queried CPU flags
Date: Fri, 11 Oct 2024 13:00:40 +0000	[thread overview]
Message-ID: <fe1fbc6cc14e41d999f722f4e610ef44@huawei.com> (raw)
In-Reply-To: <Zwkeg_-9_w8TNhio@bricha3-mobl1.ger.corp.intel.com>



> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Friday, October 11, 2024 1:48 PM
> To: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> Cc: dev@dpdk.org; david.marchand@redhat.com
> Subject: Re: [PATCH] eal/x86: cache queried CPU flags
> 
> On Fri, Oct 11, 2024 at 12:42:01PM +0000, Konstantin Ananyev wrote:
> >
> >
> > > Rather than re-querying the HW each time a CPU flag is requested, we can
> > > just save the return value in the flags array. This should speed up
> > > repeated querying of CPU flags, and provides a workaround for a reported
> > > issue where errors are seen with constant querying of the AVX-512 CPU
> > > flag from a non-AVX VM.
> > >
> > > Bugzilla Id: 1501
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >  lib/eal/x86/rte_cpuflags.c | 20 +++++++++++++++-----
> > >  1 file changed, 15 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
> > > index 26163ab746..62e782fb4b 100644
> > > --- a/lib/eal/x86/rte_cpuflags.c
> > > +++ b/lib/eal/x86/rte_cpuflags.c
> > > @@ -8,6 +8,7 @@
> > >  #include <errno.h>
> > >  #include <stdint.h>
> > >  #include <string.h>
> > > +#include <stdbool.h>
> > >
> > >  #include "rte_cpuid.h"
> > >
> > > @@ -21,12 +22,14 @@ struct feature_entry {
> > >  	uint32_t bit;				/**< cpuid register bit */
> > >  #define CPU_FLAG_NAME_MAX_LEN 64
> > >  	char name[CPU_FLAG_NAME_MAX_LEN];       /**< String for printing */
> > > +	bool has_value;
> > > +	bool value;
> > >  };
> > >
> > >  #define FEAT_DEF(name, leaf, subleaf, reg, bit) \
> > >  	[RTE_CPUFLAG_##name] = {leaf, subleaf, reg, bit, #name },
> > >
> > > -const struct feature_entry rte_cpu_feature_table[] = {
> > > +struct feature_entry rte_cpu_feature_table[] = {
> > >  	FEAT_DEF(SSE3, 0x00000001, 0, RTE_REG_ECX,  0)
> > >  	FEAT_DEF(PCLMULQDQ, 0x00000001, 0, RTE_REG_ECX,  1)
> > >  	FEAT_DEF(DTES64, 0x00000001, 0, RTE_REG_ECX,  2)
> > > @@ -147,7 +150,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
> > >  int
> > >  rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >  {
> > > -	const struct feature_entry *feat;
> > > +	struct feature_entry *feat;
> > >  	cpuid_registers_t regs;
> > >  	unsigned int maxleaf;
> > >
> > > @@ -156,6 +159,8 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >  		return -ENOENT;
> > >
> > >  	feat = &rte_cpu_feature_table[feature];
> > > +	if (feat->has_value)
> > > +		return feat->value;
> > >
> > >  	if (!feat->leaf)
> > >  		/* This entry in the table wasn't filled out! */
> > > @@ -163,8 +168,10 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >
> > >  	maxleaf = __get_cpuid_max(feat->leaf & 0x80000000, NULL);
> > >
> > > -	if (maxleaf < feat->leaf)
> > > -		return 0;
> > > +	if (maxleaf < feat->leaf) {
> > > +		feat->value = 0;
> > > +		goto out;
> > > +	}
> > >
> > >  #ifdef RTE_TOOLCHAIN_MSVC
> > >  	__cpuidex(regs, feat->leaf, feat->subleaf);
> > > @@ -175,7 +182,10 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> > >  #endif
> > >
> > >  	/* check if the feature is enabled */
> > > -	return (regs[feat->reg] >> feat->bit) & 1;
> > > +	feat->value = (regs[feat->reg] >> feat->bit) & 1;
> > > +out:
> > > +	feat->has_value = true;
> > > +	return feat->value;
> >
> > If that function can be called by 2 (or more) threads simultaneously,
> > then in theory 'feat->has_value = true;' can be reordered with
> > 'feat->value = (regs[feat->reg] >> feat->bit) & 1;' (by the CPU or the compiler),
> > and some thread(s) could read a stale feat->value.
> > The probability of such a collision is really low, but it still seems not impossible.
> >
> 
> Well, since this code is x86-specific, the externally visible store ordering
> will match the instruction store ordering. Therefore, I think a compiler
> barrier is all that is necessary before the feat->has_value assignment,
> correct?

Yep, seems so.
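
A minimal sketch (not the actual follow-up patch) of where such a barrier could sit at the tail of rte_cpu_get_flag_enabled(), assuming rte_atomic.h is included for the existing rte_compiler_barrier() helper:

	/* check if the feature is enabled */
	feat->value = (regs[feat->reg] >> feat->bit) & 1;
out:
	/*
	 * On x86, stores become globally visible in program order, so a
	 * compiler barrier before setting has_value is enough to prevent
	 * another thread from observing has_value == true while value is
	 * still stale.
	 */
	rte_compiler_barrier();
	feat->has_value = true;
	return feat->value;

On other architectures (or to make the intent explicit) a release store, e.g. __atomic_store_n(&feat->has_value, true, __ATOMIC_RELEASE), would give the same guarantee, but for this x86-only file the plain compiler barrier is the lighter-weight option.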


Thread overview: 13+ messages
2024-10-07 11:06 Bruce Richardson
2024-10-07 11:23 ` David Marchand
2024-10-07 14:47   ` Wathsala Wathawana Vithanage
2024-10-07 16:29 ` Wathsala Wathawana Vithanage
2024-10-07 16:43   ` Bruce Richardson
2024-10-07 17:30     ` Wathsala Wathawana Vithanage
2024-10-11 12:42 ` Konstantin Ananyev
2024-10-11 12:48   ` Bruce Richardson
2024-10-11 13:00     ` Konstantin Ananyev [this message]
2024-10-11 13:31 ` Bruce Richardson
2024-10-11 13:33 ` [PATCH v2] " Bruce Richardson
2024-10-11 13:37   ` Konstantin Ananyev
2024-10-14 18:29     ` David Marchand
