Date: Thu, 8 Mar 2018 13:15:10 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Cc: dev@dpdk.org
Message-ID: <20180308131510.677ff22d@xeon-e3>
In-Reply-To: <2601191342CEEE43887BDE71AB9772585FAC39B0@irsmsx105.ger.corp.intel.com>
References: <1512126771-27503-1-git-send-email-konstantin.ananyev@intel.com>
 <1512126771-27503-2-git-send-email-konstantin.ananyev@intel.com>
 <20171201100418.3491bff0@xeon-e3>
 <2601191342CEEE43887BDE71AB9772585FAC39B0@irsmsx105.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH 2/2] eal/x86: Use lock-prefixed instructions to reduce cost of rte_smp_mb()
List-Id: DPDK patches and discussions

On Fri, 1 Dec 2017 23:08:39 +0000
"Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:

> Hi Stephen,
>
> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Friday, December 1, 2017 6:04 PM
> > To: Ananyev, Konstantin
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 2/2] eal/x86: Use lock-prefixed instructions to reduce cost of rte_smp_mb()
> >
> > On Fri, 1 Dec 2017 11:12:51 +0000
> > Konstantin Ananyev <konstantin.ananyev@intel.com> wrote:
> >
> > > On x86 it is possible to use lock-prefixed instructions to get
> > > a similar effect to mfence.
> > > As pointed out by the Java folks, on most modern HW that gives
> > > better performance than using mfence:
> > > https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> > > This patch adopts that technique for the rte_smp_mb() implementation.
> > > On BDW 2.2, mb_autotest on a single lcore reports a 2X cycle
> > > reduction, i.e. from ~110 to ~55 cycles per operation.
> > >
> > > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > >  .../common/include/arch/x86/rte_atomic.h | 45 +++++++++++++++++++++-
> > >  1 file changed, 43 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic.h b/lib/librte_eal/common/include/arch/x86/rte_atomic.h
> > > index 4eac66631..07b7fa7f7 100644
> > > --- a/lib/librte_eal/common/include/arch/x86/rte_atomic.h
> > > +++ b/lib/librte_eal/common/include/arch/x86/rte_atomic.h
> > > @@ -55,12 +55,53 @@ extern "C" {
> > >
> > >  #define rte_rmb() _mm_lfence()
> > >
> > > -#define rte_smp_mb() rte_mb()
> > > -
> > >  #define rte_smp_wmb() rte_compiler_barrier()
> > >
> > >  #define rte_smp_rmb() rte_compiler_barrier()
> > >
> > > +/*
> > > + * From Intel Software Development Manual; Vol 3;
> > > + * 8.2.2 Memory Ordering in P6 and More Recent Processor Families:
> > > + * ...
> > > + * . Reads are not reordered with other reads.
> > > + * . Writes are not reordered with older reads.
> > > + * . Writes to memory are not reordered with other writes,
> > > + *   with the following exceptions:
> > > + *   . streaming stores (writes) executed with the non-temporal move
> > > + *     instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD); and
> > > + *   . string operations (see Section 8.2.4.1).
> > > + * ...
> > > + * . Reads may be reordered with older writes to different locations but not
> > > + *   with older writes to the same location.
> > > + * . Reads or writes cannot be reordered with I/O instructions,
> > > + *   locked instructions, or serializing instructions.
> > > + * . Reads cannot pass earlier LFENCE and MFENCE instructions.
> > > + * . Writes ... cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
> > > + * . LFENCE instructions cannot pass earlier reads.
> > > + * . SFENCE instructions cannot pass earlier writes ...
> > > + * . MFENCE instructions cannot pass earlier reads, writes ...
> > > + *
> > > + * As pointed out by the Java folks, that makes it possible to use
> > > + * lock-prefixed instructions to get the same effect as mfence, and on
> > > + * most modern HW that gives better performance than using mfence:
> > > + * https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> > > + * So below we use that technique for the rte_smp_mb() implementation.
> > > + */
> > > +
> > > +#ifdef RTE_ARCH_I686
> > > +#define RTE_SP RTE_STR(esp)
> > > +#else
> > > +#define RTE_SP RTE_STR(rsp)
> > > +#endif
> > > +
> > > +#define RTE_MB_DUMMY_MEMP "-128(%%" RTE_SP ")"
> > > +
> > > +static __rte_always_inline void
> > > +rte_smp_mb(void)
> > > +{
> > > +	asm volatile("lock addl $0," RTE_MB_DUMMY_MEMP "; " ::: "memory");
> > > +}
> > > +
> > >  #define rte_io_mb() rte_mb()
> > >
> > >  #define rte_io_wmb() rte_compiler_barrier()
> >
> > The lock instruction is a stronger barrier than the compiler barrier
> > and has a worse performance impact. Are you sure it is necessary to
> > use it in DPDK? The Linux kernel has successfully used a simple
> > compiler reordering barrier for years.
>
> Where do you see a compiler barrier?
> Right now for x86, rte_smp_mb() == rte_mb() == mfence.
> So I am replacing mfence with 'lock add'.
> As the comment above says, on most modern x86 systems it is faster,
> while still preserving memory ordering.

There are cases like virtio/vhost where we could be using
rte_compiler_barrier(). The mfence to lock add conversion makes sense.