From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chao Zhu <bjzhuc@cn.ibm.com>
To: dev@dpdk.org
Date: Fri, 26 Sep 2014 05:33:36 -0400
Message-Id: <1411724018-7738-6-git-send-email-bjzhuc@cn.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1411724018-7738-1-git-send-email-bjzhuc@cn.ibm.com>
References: <1411724018-7738-1-git-send-email-bjzhuc@cn.ibm.com>
Subject: [dpdk-dev] [PATCH 5/7] Split spinlock operations to architecture specific

This patch splits the spinlock operations out of the common DPDK code and moves them into architecture-specific arch directories, so that support for other processor architectures can be added to DPDK easily.
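To illustrate the new layout: a port to another architecture only needs to supply rte_spinlock_t and the three rte_arch_spinlock_*() functions in its own arch/rte_spinlock_arch.h. The sketch below is illustrative only; it reuses the portable __sync-intrinsics branch from this patch, and the empty spin body is a stand-in for rte_pause(), which a real port would provide.

/*
 * Minimal sketch of the per-architecture contract, using only the GCC
 * __sync builtins (the RTE_FORCE_INTRINSICS branch of this patch).
 */
typedef struct {
	volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;

static inline void
rte_arch_spinlock_lock(rte_spinlock_t *sl)
{
	/* Atomic test-and-set; on failure spin on plain reads so the
	 * cache line is not hammered with atomic operations. */
	while (__sync_lock_test_and_set(&sl->locked, 1))
		while (sl->locked)
			; /* a real port would call rte_pause() here */
}

static inline void
rte_arch_spinlock_unlock(rte_spinlock_t *sl)
{
	/* Release-store of 0: publishes prior writes, then frees the lock. */
	__sync_lock_release(&sl->locked);
}

static inline int
rte_arch_spinlock_trylock(rte_spinlock_t *sl)
{
	/* Returns 1 only if the old value was 0, i.e. the lock was taken. */
	return (__sync_lock_test_and_set(&sl->locked, 1) == 0);
}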
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
---
 lib/librte_eal/common/Makefile                     |    2 +-
 .../common/include/i686/arch/rte_spinlock_arch.h   |  128 ++++++++++++++++++++
 lib/librte_eal/common/include/rte_spinlock.h       |   55 +--------
 .../common/include/x86_64/arch/rte_spinlock_arch.h |  128 ++++++++++++++++++++
 4 files changed, 261 insertions(+), 52 deletions(-)
 create mode 100644 lib/librte_eal/common/include/i686/arch/rte_spinlock_arch.h
 create mode 100644 lib/librte_eal/common/include/x86_64/arch/rte_spinlock_arch.h

diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index bb175ca..249ea2f 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -46,7 +46,7 @@ ifeq ($(CONFIG_RTE_INSECURE_FUNCTION_WARNING),y)
 INC += rte_warnings.h
 endif
 
-ARCH_INC := rte_atomic.h rte_atomic_arch.h rte_byteorder_arch.h rte_cycles_arch.h rte_prefetch_arch.h
+ARCH_INC := rte_atomic.h rte_atomic_arch.h rte_byteorder_arch.h rte_cycles_arch.h rte_prefetch_arch.h rte_spinlock_arch.h
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include := $(addprefix include/,$(INC))
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include/arch := \
diff --git a/lib/librte_eal/common/include/i686/arch/rte_spinlock_arch.h b/lib/librte_eal/common/include/i686/arch/rte_spinlock_arch.h
new file mode 100644
index 0000000..2b13dcd
--- /dev/null
+++ b/lib/librte_eal/common/include/i686/arch/rte_spinlock_arch.h
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SPINLOCK_ARCH_H_
+#define _RTE_SPINLOCK_ARCH_H_
+
+#include <rte_lcore.h>
+#ifdef RTE_FORCE_INTRINSICS
+#include <rte_common.h>
+#endif
+
+/**
+ * The rte_spinlock_t type.
+ */
+typedef struct {
+	volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+} rte_spinlock_t;
+
+/**
+ * Take the spinlock.
+ *
+ * @param sl
+ *   A pointer to the spinlock.
+ */
+static inline void
+rte_arch_spinlock_lock(rte_spinlock_t *sl)
+{
+#ifndef RTE_FORCE_INTRINSICS
+	int lock_val = 1;
+	asm volatile (
+			"1:\n"
+			"xchg %[locked], %[lv]\n"
+			"test %[lv], %[lv]\n"
+			"jz 3f\n"
+			"2:\n"
+			"pause\n"
+			"cmpl $0, %[locked]\n"
+			"jnz 2b\n"
+			"jmp 1b\n"
+			"3:\n"
+			: [locked] "=m" (sl->locked), [lv] "=q" (lock_val)
+			: "[lv]" (lock_val)
+			: "memory");
+#else
+	while (__sync_lock_test_and_set(&sl->locked, 1))
+		while(sl->locked)
+			rte_pause();
+#endif
+}
+
+/**
+ * Release the spinlock.
+ *
+ * @param sl
+ *   A pointer to the spinlock.
+ */
+static inline void
+rte_arch_spinlock_unlock (rte_spinlock_t *sl)
+{
+#ifndef RTE_FORCE_INTRINSICS
+	int unlock_val = 0;
+	asm volatile (
+			"xchg %[locked], %[ulv]\n"
+			: [locked] "=m" (sl->locked), [ulv] "=q" (unlock_val)
+			: "[ulv]" (unlock_val)
+			: "memory");
+#else
+	__sync_lock_release(&sl->locked);
+#endif
+}
+
+/**
+ * Try to take the lock.
+ *
+ * @param sl
+ *   A pointer to the spinlock.
+ * @return
+ *   1 if the lock is successfully taken; 0 otherwise.
+ */
+static inline int
+rte_arch_spinlock_trylock (rte_spinlock_t *sl)
+{
+#ifndef RTE_FORCE_INTRINSICS
+	int lockval = 1;
+
+	asm volatile (
+			"xchg %[locked], %[lockval]"
+			: [locked] "=m" (sl->locked), [lockval] "=q" (lockval)
+			: "[lockval]" (lockval)
+			: "memory");
+
+	return (lockval == 0);
+#else
+	return (__sync_lock_test_and_set(&sl->locked,1) == 0);
+#endif
+}
+
+#endif /* _RTE_SPINLOCK_ARCH_H_ */
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/rte_spinlock.h b/lib/librte_eal/common/include/rte_spinlock.h
index 661908d..1cab17f 100644
--- a/lib/librte_eal/common/include/rte_spinlock.h
+++ b/lib/librte_eal/common/include/rte_spinlock.h
@@ -55,13 +49,7 @@ extern "C" {
 #ifdef RTE_FORCE_INTRINSICS
 #include <rte_common.h>
 #endif
-
-/**
- * The rte_spinlock_t type.
- */
-typedef struct {
-	volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
-} rte_spinlock_t;
+#include <arch/rte_spinlock_arch.h>
 
 /**
  * A static spinlock initializer.
@@ -89,27 +83,7 @@ rte_spinlock_init(rte_spinlock_t *sl)
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
 {
-#ifndef RTE_FORCE_INTRINSICS
-	int lock_val = 1;
-	asm volatile (
-			"1:\n"
-			"xchg %[locked], %[lv]\n"
-			"test %[lv], %[lv]\n"
-			"jz 3f\n"
-			"2:\n"
-			"pause\n"
-			"cmpl $0, %[locked]\n"
-			"jnz 2b\n"
-			"jmp 1b\n"
-			"3:\n"
-			: [locked] "=m" (sl->locked), [lv] "=q" (lock_val)
-			: "[lv]" (lock_val)
-			: "memory");
-#else
-	while (__sync_lock_test_and_set(&sl->locked, 1))
-		while(sl->locked)
-			rte_pause();
-#endif
+	rte_arch_spinlock_lock(sl);
 }
 
 /**
@@ -121,16 +95,7 @@ rte_spinlock_lock(rte_spinlock_t *sl)
 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
 {
-#ifndef RTE_FORCE_INTRINSICS
-	int unlock_val = 0;
-	asm volatile (
-			"xchg %[locked], %[ulv]\n"
-			: [locked] "=m" (sl->locked), [ulv] "=q" (unlock_val)
-			: "[ulv]" (unlock_val)
-			: "memory");
-#else
-	__sync_lock_release(&sl->locked);
-#endif
+	rte_arch_spinlock_unlock(sl);
 }
 
 /**
@@ -144,19 +109,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl)
 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
 {
-#ifndef RTE_FORCE_INTRINSICS
-	int lockval = 1;
-
-	asm volatile (
-			"xchg %[locked], %[lockval]"
-			: [locked] "=m" (sl->locked), [lockval] "=q" (lockval)
-			: "[lockval]" (lockval)
-			: "memory");
-
-	return (lockval == 0);
-#else
-	return (__sync_lock_test_and_set(&sl->locked,1) == 0);
-#endif
+	return rte_arch_spinlock_trylock(sl);
 }
 
 /**
diff --git a/lib/librte_eal/common/include/x86_64/arch/rte_spinlock_arch.h b/lib/librte_eal/common/include/x86_64/arch/rte_spinlock_arch.h
new file mode 100644
index 0000000..2b13dcd
--- /dev/null
+++ b/lib/librte_eal/common/include/x86_64/arch/rte_spinlock_arch.h
@@ -0,0 +1,128 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SPINLOCK_ARCH_H_
+#define _RTE_SPINLOCK_ARCH_H_
+
+#include <rte_lcore.h>
+#ifdef RTE_FORCE_INTRINSICS
+#include <rte_common.h>
+#endif
+
+/**
+ * The rte_spinlock_t type.
+ */
+typedef struct {
+	volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+} rte_spinlock_t;
+
+/**
+ * Take the spinlock.
+ *
+ * @param sl
+ *   A pointer to the spinlock.
+ */
+static inline void
+rte_arch_spinlock_lock(rte_spinlock_t *sl)
+{
+#ifndef RTE_FORCE_INTRINSICS
+	int lock_val = 1;
+	asm volatile (
+			"1:\n"
+			"xchg %[locked], %[lv]\n"
+			"test %[lv], %[lv]\n"
+			"jz 3f\n"
+			"2:\n"
+			"pause\n"
+			"cmpl $0, %[locked]\n"
+			"jnz 2b\n"
+			"jmp 1b\n"
+			"3:\n"
+			: [locked] "=m" (sl->locked), [lv] "=q" (lock_val)
+			: "[lv]" (lock_val)
+			: "memory");
+#else
+	while (__sync_lock_test_and_set(&sl->locked, 1))
+		while(sl->locked)
+			rte_pause();
+#endif
+}
+
+/**
+ * Release the spinlock.
+ *
+ * @param sl
+ *   A pointer to the spinlock.
+ */
+static inline void
+rte_arch_spinlock_unlock (rte_spinlock_t *sl)
+{
+#ifndef RTE_FORCE_INTRINSICS
+	int unlock_val = 0;
+	asm volatile (
+			"xchg %[locked], %[ulv]\n"
+			: [locked] "=m" (sl->locked), [ulv] "=q" (unlock_val)
+			: "[ulv]" (unlock_val)
+			: "memory");
+#else
+	__sync_lock_release(&sl->locked);
+#endif
+}
+
+/**
+ * Try to take the lock.
+ *
+ * @param sl
+ *   A pointer to the spinlock.
+ * @return
+ *   1 if the lock is successfully taken; 0 otherwise.
+ */
+static inline int
+rte_arch_spinlock_trylock (rte_spinlock_t *sl)
+{
+#ifndef RTE_FORCE_INTRINSICS
+	int lockval = 1;
+
+	asm volatile (
+			"xchg %[locked], %[lockval]"
+			: [locked] "=m" (sl->locked), [lockval] "=q" (lockval)
+			: "[lockval]" (lockval)
+			: "memory");
+
+	return (lockval == 0);
+#else
+	return (__sync_lock_test_and_set(&sl->locked,1) == 0);
+#endif
+}
+
+#endif /* _RTE_SPINLOCK_ARCH_H_ */
\ No newline at end of file
-- 
1.7.1
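
For reference, callers are unaffected by this split: the public rte_spinlock_* API keeps its semantics and only the backing implementation moves. A hypothetical usage sketch (counter_lock, counter, and the two helper functions are illustrative names, not part of the patch):

#include <rte_spinlock.h>

/* hypothetical example state, not part of this patch */
static rte_spinlock_t counter_lock = RTE_SPINLOCK_INITIALIZER;
static unsigned long counter;

static void
bump_counter(void)
{
	rte_spinlock_lock(&counter_lock);   /* spins until acquired */
	counter++;
	rte_spinlock_unlock(&counter_lock);
}

static int
try_bump_counter(void)
{
	if (rte_spinlock_trylock(&counter_lock) == 0)
		return 0; /* lock busy; caller may retry or back off */
	counter++;
	rte_spinlock_unlock(&counter_lock);
	return 1;
}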