From: Thomas Monjalon <thomas@monjalon.net>
To: Jan Viktorin, Gavin Hu, Chao Zhu
Cc: dev@dpdk.org
Date: Tue, 19 Mar 2019 22:16:00 +0100
Message-Id: <20190319211601.31983-1-thomas@monjalon.net>
X-Mailer: git-send-email 2.20.1
Subject: [dpdk-dev] [PATCH] eal: remove redundant API description

Atomic functions are described in the doxygen comments of the file
lib/librte_eal/common/include/generic/rte_atomic.h
The copies in the arch-specific files are redundant and confuse readers
about the genericity of the API.
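The generic header is meant to be the single source of truth for these
barriers, so a caller only ever relies on the generic contract. Below is a
minimal sketch of such a caller; the msg_slot structure and the
producer/consumer helpers are hypothetical, invented here purely to
illustrate how rte_wmb()/rte_rmb() are used independently of the target
architecture:

	#include <stdint.h>
	#include <rte_atomic.h>

	/* Hypothetical single-producer publication: only the generic
	 * ordering guarantees of rte_wmb()/rte_rmb() are relied upon,
	 * whatever architecture the build targets. */
	struct msg_slot {
		uint32_t payload;
		volatile uint32_t ready;
	};

	static void
	producer_publish(struct msg_slot *slot, uint32_t value)
	{
		slot->payload = value;
		rte_wmb();      /* STORE of payload completes before STORE of ready */
		slot->ready = 1;
	}

	static int
	consumer_poll(const struct msg_slot *slot, uint32_t *value)
	{
		if (slot->ready == 0)
			return 0;
		rte_rmb();      /* LOAD of ready completes before LOAD of payload */
		*value = slot->payload;
		return 1;
	}

Whichever arch-specific header ends up providing the macros, the ordering
the caller depends on is exactly the one documented once in
generic/rte_atomic.h, which is why the duplicated descriptions add nothing.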
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 .../common/include/arch/arm/rte_atomic_32.h  | 18 ------------------
 .../common/include/arch/ppc_64/rte_atomic.h  | 18 ------------------
 .../common/include/generic/rte_atomic.h      |  3 ---
 3 files changed, 39 deletions(-)

diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic_32.h b/lib/librte_eal/common/include/arch/arm/rte_atomic_32.h
index 859562e59..7dc0d06d1 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_atomic_32.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_atomic_32.h
@@ -15,28 +15,10 @@ extern "C" {
 
 #include "generic/rte_atomic.h"
 
-/**
- * General memory barrier.
- *
- * Guarantees that the LOAD and STORE operations generated before the
- * barrier occur before the LOAD and STORE operations generated after.
- */
 #define rte_mb() __sync_synchronize()
 
-/**
- * Write memory barrier.
- *
- * Guarantees that the STORE operations generated before the barrier
- * occur before the STORE operations generated after.
- */
 #define rte_wmb() do { asm volatile ("dmb st" : : : "memory"); } while (0)
 
-/**
- * Read memory barrier.
- *
- * Guarantees that the LOAD operations generated before the barrier
- * occur before the LOAD operations generated after.
- */
 #define rte_rmb() __sync_synchronize()
 
 #define rte_smp_mb() rte_mb()
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h b/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
index ce38350bd..2dd59fd78 100644
--- a/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
@@ -49,32 +49,14 @@ extern "C" {
 #include <stdint.h>
 #include "generic/rte_atomic.h"
 
-/**
- * General memory barrier.
- *
- * Guarantees that the LOAD and STORE operations generated before the
- * barrier occur before the LOAD and STORE operations generated after.
- */
 #define rte_mb() asm volatile("sync" : : : "memory")
 
-/**
- * Write memory barrier.
- *
- * Guarantees that the STORE operations generated before the barrier
- * occur before the STORE operations generated after.
- */
 #ifdef RTE_ARCH_64
 #define rte_wmb() asm volatile("lwsync" : : : "memory")
 #else
 #define rte_wmb() asm volatile("sync" : : : "memory")
 #endif
 
-/**
- * Read memory barrier.
- *
- * Guarantees that the LOAD operations generated before the barrier
- * occur before the LOAD operations generated after.
- */
 #ifdef RTE_ARCH_64
 #define rte_rmb() asm volatile("lwsync" : : : "memory")
 #else
diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index 4afd1acc3..e91742702 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -25,7 +25,6 @@
  *
  * Guarantees that the LOAD and STORE operations generated before the
  * barrier occur before the LOAD and STORE operations generated after.
- * This function is architecture dependent.
  */
 static inline void rte_mb(void);
 
@@ -34,7 +33,6 @@ static inline void rte_mb(void);
  *
  * Guarantees that the STORE operations generated before the barrier
  * occur before the STORE operations generated after.
- * This function is architecture dependent.
  */
 static inline void rte_wmb(void);
 
@@ -43,7 +41,6 @@ static inline void rte_wmb(void);
  *
  * Guarantees that the LOAD operations generated before the barrier
  * occur before the LOAD operations generated after.
- * This function is architecture dependent.
  */
 static inline void rte_rmb(void);
 ///@}
-- 
2.20.1