From: "Venkatesan, Venky"
To: Stephen Hemminger, Olivier MATZ
Cc: dev@dpdk.org
Date: Thu, 27 Mar 2014 23:53:29 +0000
Subject: Re: [dpdk-dev] memory barriers in rte_ring

One caveat - a compiler_barrier should be enough when both sides are using strongly-ordered memory operations (as in the case of the rings). Weakly ordered operations will still need fencing.

-Venky

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
Sent: Thursday, March 27, 2014 1:20 PM
To: Olivier MATZ
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] memory barriers in rte_ring

On Thu, 27 Mar 2014 20:47:37 +0100
Olivier MATZ wrote:

> Hi Stephen,
>
> On 03/27/2014 08:06 PM, Stephen Hemminger wrote:
> > Long answer: for the multiple-CPU access ring, it is equivalent to smp_wmb and smp_rmb
> > in the Linux kernel. For x86, where DPDK is used, these can normally be replaced by a simpler
> > compiler barrier. In the kernel there is a special flag, X86_OOSTORE, which is only enabled
> > for a few special cases; for most cases it is not. When the CPU doesn't do out-of-order
> > stores, there are no cases where another CPU will see the wrong state.
>
> Thank you for this clarification.
>
> So, if I understand properly, all usages of rte_*mb() sequencing
> memory operations between CPUs could be replaced by a compiler
> barrier. On the other hand, if the memory is also accessed by a
> device, a memory barrier has to be used.
>
> Olivier
>

I think so for the current architecture that DPDK runs on. It might be good to abstract this in some way for eventual users in other environments.
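
To illustrate the point being discussed, here is a minimal single-producer/single-consumer ring sketch. It is not the actual rte_ring code; the names spsc_ring, ring_enqueue, and the barrier macros are illustrative only. It shows where the ordering constraint sits: the store to the slot must be visible before the new tail is published, and on x86 (which does not reorder stores with other stores) a compiler barrier is enough for CPU-to-CPU visibility, while a weakly ordered CPU, or memory also read by a device, needs a real write fence.

    #include <stdint.h>

    #define RING_SIZE 1024                          /* must be a power of two */

    /* Compiler-only barrier: stops the compiler from reordering, emits no instruction. */
    #define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")
    /* Full memory fence for weakly ordered CPUs or device-visible memory. */
    #define STORE_FENCE()      __sync_synchronize()

    struct spsc_ring {
        volatile uint32_t head;                     /* consumer index */
        volatile uint32_t tail;                     /* producer index */
        void *slots[RING_SIZE];
    };

    /* Returns 0 on success, -1 if the ring is full. */
    static int ring_enqueue(struct spsc_ring *r, void *obj)
    {
        uint32_t tail = r->tail;

        if (tail - r->head >= RING_SIZE)
            return -1;                              /* full */

        r->slots[tail & (RING_SIZE - 1)] = obj;

        /*
         * The slot write must become visible before the tail update that
         * publishes it. On x86 the store ordering guarantee makes a
         * compiler barrier sufficient; elsewhere use a real fence.
         */
    #if defined(__x86_64__) || defined(__i386__)
        COMPILER_BARRIER();
    #else
        STORE_FENCE();
    #endif
        r->tail = tail + 1;
        return 0;
    }

The #if split above is one way to capture the "abstract this for other environments" suggestion: the call site always states the ordering requirement, and the architecture decides whether that costs an instruction or nothing at all.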