From: Bruce Richardson
To: Konstantin Ananyev
Cc: dev@dpdk.org
Date: Mon, 11 Dec 2017 17:11:21 +0000
Subject: Re: [dpdk-dev] [PATCH 2/2] eal/x86: Use lock-prefixed instructions to reduce cost of rte_smp_mb()
Message-ID: <20171211171121.GB2232@bricha3-MOBL3.ger.corp.intel.com>
References: <1512126771-27503-1-git-send-email-konstantin.ananyev@intel.com>
 <1512126771-27503-2-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1512126771-27503-2-git-send-email-konstantin.ananyev@intel.com>

On Fri, Dec 01, 2017 at 11:12:51AM +0000, Konstantin Ananyev wrote:
> On x86 it is possible to use lock-prefixed instructions to get
> a similar effect to mfence.
> As pointed out by the Java guys, on most modern HW that gives better
> performance than using mfence:
> https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> This patch adopts that technique for the rte_smp_mb() implementation.
> On BDW 2.2, mb_autotest on a single lcore reports a 2X cycle reduction,
> i.e. from ~110 to ~55 cycles per operation.
>
> Signed-off-by: Konstantin Ananyev
> ---
>  .../common/include/arch/x86/rte_atomic.h | 45 +++++++++++++++++++++-
>  1 file changed, 43 insertions(+), 2 deletions(-)
>
> + * As pointed out by the Java guys, that makes it possible to use lock-prefixed
> + * instructions to get the same effect as mfence, and on most modern HW
> + * that gives better performance than using mfence:
> + * https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> + * So below we use that technique for the rte_smp_mb() implementation.
> + */
> +
> +#ifdef RTE_ARCH_I686
> +#define RTE_SP RTE_STR(esp)
> +#else
> +#define RTE_SP RTE_STR(rsp)
> +#endif
> +
> +#define RTE_MB_DUMMY_MEMP "-128(%%" RTE_SP ")"
> +
> +static __rte_always_inline void
> +rte_smp_mb(void)
> +{
> +	asm volatile("lock addl $0," RTE_MB_DUMMY_MEMP "; " ::: "memory");
> +}

Rather than #defining RTE_SP and RTE_MB_DUMMY_MEMP, why not just put the
#ifdef into rte_smp_mb() itself and have two asm volatile lines with
hard-coded register names in them? It would be shorter and, I think, a lot
easier to read.

/Bruce
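
For illustration, the alternative layout suggested above might look roughly
like the sketch below. This is only a sketch, not code from the patch: it
reuses the patch's RTE_ARCH_I686 check and the "-128(%sp)" dummy stack slot,
and simply spells out the stack-pointer register in each asm string.

/*
 * Illustrative sketch only: the #ifdef moves inside the function and each
 * branch hard-codes its stack-pointer register, reusing the patch's idea of
 * a lock-prefixed add to a dummy location 128 bytes below the stack pointer.
 */
static __rte_always_inline void
rte_smp_mb(void)
{
#ifdef RTE_ARCH_I686
	asm volatile("lock addl $0, -128(%%esp); " ::: "memory");
#else
	asm volatile("lock addl $0, -128(%%rsp); " ::: "memory");
#endif
}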