From: "Ananyev, Konstantin"
To: Jerin Jacob
Cc: "dev@dpdk.org"
Date: Tue, 3 Nov 2015 17:12:21 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836AB8FE8@irsmsx105.ger.corp.intel.com>
In-Reply-To: <20151103165318.GA19474@localhost.localdomain>
Subject: Re: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, November 03, 2015 4:53 PM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
>
> On Tue, Nov 03, 2015 at 04:28:00PM +0000, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Tuesday, November 03, 2015 4:19 PM
> > > To: Ananyev, Konstantin
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
> > >
> > > On Tue, Nov 03, 2015 at 03:57:24PM +0000, Ananyev, Konstantin wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > > > Sent: Tuesday, November 03, 2015 3:52 PM
> > > > > To: dev@dpdk.org
> > > > > Subject: [dpdk-dev] [RFC ][PATCH] Introduce RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration parameter
> > > > >
> > > > > The rte_ring implementation needs explicit memory barriers on
> > > > > weakly ordered architectures like ARM, unlike strongly ordered
> > > > > architectures like x86.
> > > > >
> > > > > Introduce the RTE_ARCH_STRONGLY_ORDERED_MEM_OPS configuration
> > > > > option to abstract this dependency so that other weakly ordered
> > > > > architectures can reuse the same infrastructure.
> > > >
> > > > Looks a bit clumsy.
> > > > Please try to follow this suggestion instead:
> > > > http://dpdk.org/ml/archives/dev/2015-October/025505.html
> > >
> > > Makes sense. Do we agree on a macro that is defined based upon
> > > RTE_ARCH_STRONGLY_ORDERED_MEM_OPS, to remove the clumsy #ifdef everywhere?
> >
> > Why do we need that macro at all?
> > Why not just have an architecture-specific macro, as was discussed in that thread?
> >
> > So for Intel, somewhere inside
> > lib/librte_eal/common/include/arch/x86/rte_atomic.h
> >
> > it would be:
> >
> > #define rte_smp_wmb() rte_compiler_barrier()
> >
> > For ARM, inside lib/librte_eal/common/include/arch/arm/rte_atomic.h:
> >
> > #define rte_smp_wmb() rte_wmb()
>
> I am not sure about other architectures, but on armv8 device memory
> (typically mapped through NIC PCIe BAR space) is strongly ordered.
> So there is one more dimension to the equation (normal memory or device
> memory).
> IMO the rte_smp_wmb() -> rte_wmb() mapping may not be correct on arm64
> when it has to deal with device memory?

I thought we were talking now about the multi-processor case, no?
For that there would be the rte_smp_... set of macros,
similar to what the Linux guys have.
Konstantin

>
> Thoughts ?
>
> >
> > And so on.
> >
> > I think there was already an attempt (not finished) to do similar stuff for ppc:
> > http://dpdk.org/dev/patchwork/patch/5884/
> >
> > Konstantin
> >
> > >
> > > Jerin
> > >
> > > >
> > > > Konstantin
> > > >
> > > > >
> > > > > Signed-off-by: Jerin Jacob
> > > > > ---
> > > > >  config/common_bsdapp                          |  5 +++++
> > > > >  config/common_linuxapp                        |  5 +++++
> > > > >  config/defconfig_arm64-armv8a-linuxapp-gcc    |  1 +
> > > > >  config/defconfig_arm64-thunderx-linuxapp-gcc  |  1 +
> > > > >  lib/librte_ring/rte_ring.h                    | 20 ++++++++++++++++++++
> > > > >  5 files changed, 32 insertions(+)
> > > > >
> > > > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > > > index b37dcf4..c8d1f63 100644
> > > > > --- a/config/common_bsdapp
> > > > > +++ b/config/common_bsdapp
> > > > > @@ -79,6 +79,11 @@ CONFIG_RTE_FORCE_INTRINSICS=n
> > > > >  CONFIG_RTE_ARCH_STRICT_ALIGN=n
> > > > >
> > > > >  #
> > > > > +# Machine has strongly-ordered memory operations on normal memory like x86
> > > > > +#
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=y
> > > > > +
> > > > > +#
> > > > >  # Compile to share library
> > > > >  #
> > > > >  CONFIG_RTE_BUILD_SHARED_LIB=n
> > > > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > > > index 0de43d5..d040a74 100644
> > > > > --- a/config/common_linuxapp
> > > > > +++ b/config/common_linuxapp
> > > > > @@ -79,6 +79,11 @@ CONFIG_RTE_FORCE_INTRINSICS=n
> > > > >  CONFIG_RTE_ARCH_STRICT_ALIGN=n
> > > > >
> > > > >  #
> > > > > +# Machine has strongly-ordered memory operations on normal memory like x86
> > > > > +#
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=y
> > > > > +
> > > > > +#
> > > > >  # Compile to share library
> > > > >  #
> > > > >  CONFIG_RTE_BUILD_SHARED_LIB=n
> > > > > diff --git a/config/defconfig_arm64-armv8a-linuxapp-gcc b/config/defconfig_arm64-armv8a-linuxapp-gcc
> > > > > index 6ea38a5..5289152 100644
> > > > > --- a/config/defconfig_arm64-armv8a-linuxapp-gcc
> > > > > +++ b/config/defconfig_arm64-armv8a-linuxapp-gcc
> > > > > @@ -37,6 +37,7 @@ CONFIG_RTE_ARCH="arm64"
> > > > >  CONFIG_RTE_ARCH_ARM64=y
> > > > >  CONFIG_RTE_ARCH_64=y
> > > > >  CONFIG_RTE_ARCH_ARM_NEON=y
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=n
> > > > >
> > > > >  CONFIG_RTE_FORCE_INTRINSICS=y
> > > > >
> > > > > diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > > > > index e8fccc7..79fa9e6 100644
> > > > > --- a/config/defconfig_arm64-thunderx-linuxapp-gcc
> > > > > +++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
> > > > > @@ -37,6 +37,7 @@ CONFIG_RTE_ARCH="arm64"
> > > > >  CONFIG_RTE_ARCH_ARM64=y
> > > > >  CONFIG_RTE_ARCH_64=y
> > > > >  CONFIG_RTE_ARCH_ARM_NEON=y
> > > > > +CONFIG_RTE_ARCH_STRONGLY_ORDERED_MEM_OPS=n
> > > > >
> > > > >  CONFIG_RTE_FORCE_INTRINSICS=y
> > > > >
> > > > > diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> > > > > index af68888..1ccd186 100644
> > > > > --- a/lib/librte_ring/rte_ring.h
> > > > > +++ b/lib/librte_ring/rte_ring.h
> > > > > @@ -457,7 +457,12 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> > > > >
> > > > >  	/* write entries in ring */
> > > > >  	ENQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_wmb();
> > > > > +#endif
> > > > >
> > > > >  	/* if we exceed the watermark */
> > > > >  	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
> > > > > @@ -552,7 +557,12 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> > > > >
> > > > >  	/* write entries in ring */
> > > > >  	ENQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_wmb();
> > > > > +#endif
> > > > >
> > > > >  	/* if we exceed the watermark */
> > > > >  	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
> > > > > @@ -643,7 +653,12 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> > > > >
> > > > >  	/* copy in table */
> > > > >  	DEQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_rmb();
> > > > > +#endif
> > > > >
> > > > >  	/*
> > > > >  	 * If there are other dequeues in progress that preceded us,
> > > > > @@ -727,7 +742,12 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> > > > >
> > > > >  	/* copy in table */
> > > > >  	DEQUEUE_PTRS();
> > > > > +
> > > > > +#ifdef RTE_ARCH_STRONGLY_ORDERED_MEM_OPS
> > > > >  	rte_compiler_barrier();
> > > > > +#else
> > > > > +	rte_rmb();
> > > > > +#endif
> > > > >
> > > > >  	__RING_STAT_ADD(r, deq_success, n);
> > > > >  	r->cons.tail = cons_next;
> > > > > --
> > > > > 2.1.0
> > > >
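
For reference, a minimal self-contained sketch of the ordering hazard described in
the RFC commit message above. The names (toy_ring, slots[], prod_tail, toy_smp_wmb())
are illustrative stand-ins, not the actual rte_ring fields or DPDK barrier macros:

/*
 * Producer side of a single-producer ring.  On x86, stores are not
 * reordered with other stores, so preventing compiler reordering is
 * enough; on a weakly ordered CPU such as armv8 the consumer may see
 * the new prod_tail before the slot contents unless a store barrier
 * is executed between step 1 and step 2.
 */
#include <stddef.h>

#if defined(__x86_64__) || defined(__i386__)
#define toy_smp_wmb() __asm__ volatile ("" ::: "memory")          /* compiler barrier only */
#elif defined(__aarch64__)
#define toy_smp_wmb() __asm__ volatile ("dmb ishst" ::: "memory") /* store-store barrier   */
#else
#define toy_smp_wmb() __sync_synchronize()                        /* full barrier fallback */
#endif

#define TOY_RING_SIZE 256u

struct toy_ring {
	void *slots[TOY_RING_SIZE];
	volatile size_t prod_tail;   /* index one past the last published entry */
};

static inline void
toy_ring_publish(struct toy_ring *r, size_t idx, void *obj)
{
	r->slots[idx & (TOY_RING_SIZE - 1)] = obj; /* 1. write the entry            */
	toy_smp_wmb();                             /* order the entry before tail   */
	r->prod_tail = idx + 1;                    /* 2. make it visible to readers */
}

On x86 the hardware already preserves the store-store order, which is why the ring
code has historically got away with rte_compiler_barrier() alone at this point.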
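
And a sketch of the per-architecture mapping suggested in the thread, as opposed to an
RTE_ARCH_STRONGLY_ORDERED_MEM_OPS #ifdef at every call site. The two groups of defines
would live in two separate architecture headers (shown together here only for
comparison); the read-side rte_smp_rmb() counterpart and the exact arm header path are
assumptions, since the thread only spells out the rte_smp_wmb() mapping:

/* x86: lib/librte_eal/common/include/arch/x86/rte_atomic.h
 * (strongly ordered -- stopping compiler reordering is enough between cores) */
#define rte_smp_wmb() rte_compiler_barrier()
#define rte_smp_rmb() rte_compiler_barrier()

/* arm64, assumed path: lib/librte_eal/common/include/arch/arm/rte_atomic.h
 * (weakly ordered -- real store/load barriers are required) */
#define rte_smp_wmb() rte_wmb()
#define rte_smp_rmb() rte_rmb()

The enqueue/dequeue paths in lib/librte_ring/rte_ring.h would then call rte_smp_wmb()
after ENQUEUE_PTRS() and rte_smp_rmb() after DEQUEUE_PTRS() unconditionally, instead of
the #ifdef blocks in the patch quoted above.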