From: Dekel Peled <dekelp@mellanox.com>
To: chaozhu@linux.vnet.ibm.com
Cc: yskoh@mellanox.com, shahafs@mellanox.com, dev@dpdk.org, orika@mellanox.com,
 thomas@monjalon.net, dekelp@mellanox.com, stable@dpdk.org
Date: Mon, 18 Mar 2019 14:58:13 +0200
Message-Id: <1552913893-43407-1-git-send-email-dekelp@mellanox.com>
X-Mailer: git-send-email 1.7.1
Subject: [dpdk-dev] [PATCH] eal/ppc: remove fix of memory barrier for IBM POWER
List-Id: DPDK patches and discussions

From previous patch description:
"to improve performance on PPC64, use light weight sync instruction
instead of sync instruction."

Excerpt from IBM doc [1], section "Memory barrier instructions":
"The second form of the sync instruction is light-weight sync, or lwsync.
This form is used to control ordering for storage accesses to system
memory only.
It does not create a memory barrier for accesses to device memory."

This patch removes the use of lwsync, so calls to rte_wmb() and rte_rmb()
will provide a correct memory barrier, ensuring the ordering of accesses
to both system memory and device memory.

[1] https://www.ibm.com/developerworks/systems/articles/powerpc.html

Fixes: d23a6bd04d72 ("eal/ppc: fix memory barrier for IBM POWER")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
 lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h b/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
index ce38350..797381c 100644
--- a/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
@@ -63,11 +63,7 @@
  * Guarantees that the STORE operations generated before the barrier
  * occur before the STORE operations generated after.
  */
-#ifdef RTE_ARCH_64
-#define rte_wmb() asm volatile("lwsync" : : : "memory")
-#else
 #define rte_wmb() asm volatile("sync" : : : "memory")
-#endif
 
 /**
  * Read memory barrier.
@@ -75,11 +71,7 @@
  * Guarantees that the LOAD operations generated before the barrier
  * occur before the LOAD operations generated after.
  */
-#ifdef RTE_ARCH_64
-#define rte_rmb() asm volatile("lwsync" : : : "memory")
-#else
 #define rte_rmb() asm volatile("sync" : : : "memory")
-#endif
 
 #define rte_smp_mb() rte_mb()
 
--
1.8.3.1
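
To illustrate the ordering problem the patch addresses, consider a typical
driver transmit path: descriptors are written to a ring in system memory,
then a doorbell register in device (MMIO) memory is written to notify the
NIC. The barrier between the two must also order the device-memory access,
which on POWER requires the full sync instruction. The sketch below uses
hypothetical names (my_desc, my_txq, my_txq_post, the doorbell field); they
are not part of this patch or of any DPDK driver.

/*
 * Illustrative sketch only.  The queue/descriptor/doorbell names are
 * hypothetical; the point is why rte_wmb() must order device-memory
 * (MMIO) accesses as well as system-memory accesses on POWER.
 */
#include <stdint.h>

#include <rte_atomic.h>  /* rte_wmb() */
#include <rte_io.h>      /* rte_write32() */

struct my_desc {                 /* hypothetical Tx descriptor */
	uint64_t buf_addr;
	uint32_t length;
	uint32_t flags;
};

struct my_txq {                  /* hypothetical Tx queue */
	struct my_desc *ring;    /* descriptor ring in system memory */
	volatile void *doorbell; /* device register, mapped I/O memory */
	uint16_t tail;
};

static inline void
my_txq_post(struct my_txq *txq, uint64_t buf_addr, uint32_t len)
{
	struct my_desc *d = &txq->ring[txq->tail];

	/* Step 1: fill the descriptor in system memory. */
	d->buf_addr = buf_addr;
	d->length = len;
	d->flags = 1;            /* mark the descriptor valid */

	/*
	 * Step 2: barrier.  The descriptor stores must become visible
	 * before the doorbell write below.  The doorbell is device
	 * memory, so lwsync would not order it; the full sync emitted
	 * by rte_wmb() after this patch does.
	 */
	rte_wmb();

	/* Step 3: ring the doorbell to notify the device. */
	txq->tail++;
	rte_write32(txq->tail, txq->doorbell);
}

With the lwsync-based rte_wmb(), nothing prevents the device from observing
the doorbell before the descriptor contents; with the full sync restored by
this patch, the ordering holds for both memory types.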