From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Hemant@freescale.com"
To: Chao Zhu, "dev@dpdk.org"
Thread-Topic: [dpdk-dev] [PATCH 02/12] Add atomic operations for IBM Power architecture
Date: Mon, 29 Sep 2014 06:16:16 +0000
References: <1411724186-8036-1-git-send-email-bjzhuc@cn.ibm.com>
 <1411724186-8036-3-git-send-email-bjzhuc@cn.ibm.com>
In-Reply-To: <1411724186-8036-3-git-send-email-bjzhuc@cn.ibm.com>
Subject: Re: [dpdk-dev] [PATCH 02/12] Add atomic operations for IBM Power architecture

Hi Chao,

This patch seems to be incomplete. You may also need to patch
librte_eal/common/include/rte_atomic.h, e.g.:

#if !(defined RTE_ARCH_X86_64) || !(defined RTE_ARCH_I686)
#include
#else /* if Intel */

Otherwise you will get compilation errors for "_mm_mfence".
The same is true for the other common header files.

Regards,
Hemant

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chao Zhu
> Sent: 26/Sep/2014 3:06 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 02/12] Add atomic operations for IBM Power
> architecture
>
> The atomic operations implemented with assembly code in DPDK only support
> x86. This patch adds architecture-specific atomic operations for the IBM
> Power architecture.
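[Editor's illustration] A compilable sketch of the guard Hemant suggests above. The `RTE_USE_ARCH_ATOMICS` macro is purely hypothetical (it stands in for the architecture-specific `#include` each branch would pull in), and note it tests with `&&` rather than the mail's `||`: since `RTE_ARCH_X86_64` and `RTE_ARCH_I686` are mutually exclusive, `!(defined A) || !(defined B)` is always true, and "neither Intel target is defined" is presumably what was meant.

```c
/* Hypothetical sketch only: RTE_USE_ARCH_ATOMICS stands in for the
 * arch-specific #include that would go in each branch of the guard. */
#if !defined(RTE_ARCH_X86_64) && !defined(RTE_ARCH_I686)
/* non-Intel build: use the architecture-provided atomics header */
#define RTE_USE_ARCH_ATOMICS 1
#else
/* Intel build: the existing SSE2-based code (_mm_mfence etc.) applies */
#define RTE_USE_ARCH_ATOMICS 0
#endif
```

With neither Intel macro defined, the first branch is taken, which is exactly the case the Power port needs.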
>
> Signed-off-by: Chao Zhu
> ---
>  .../common/include/powerpc/arch/rte_atomic.h       |  387 ++++++++++++++++++++
>  .../common/include/powerpc/arch/rte_atomic_arch.h  |  318 ++++++++++++++++
>  2 files changed, 705 insertions(+), 0 deletions(-)
>  create mode 100644 lib/librte_eal/common/include/powerpc/arch/rte_atomic.h
>  create mode 100644 lib/librte_eal/common/include/powerpc/arch/rte_atomic_arch.h
>
> diff --git a/lib/librte_eal/common/include/powerpc/arch/rte_atomic.h
> b/lib/librte_eal/common/include/powerpc/arch/rte_atomic.h
> new file mode 100644
> index 0000000..7f5214e
> --- /dev/null
> +++ b/lib/librte_eal/common/include/powerpc/arch/rte_atomic.h
> @@ -0,0 +1,387 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) IBM Corporation 2014.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of IBM Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED.
> + *   IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +*/
> +
> +/*
> + * Inspired from FreeBSD src/sys/powerpc/include/atomic.h
> + * Copyright (c) 2008 Marcel Moolenaar
> + * Copyright (c) 2001 Benno Rice
> + * Copyright (c) 2001 David E. O'Brien
> + * Copyright (c) 1998 Doug Rabson
> + * All rights reserved.
> + */
> +
> +#ifndef _RTE_ATOMIC_H_
> +#error "don't include this file directly, please include generic <rte_atomic.h>"
> +#endif
> +
> +#ifndef _RTE_POWERPC_64_ATOMIC_H_
> +#define _RTE_POWERPC_64_ATOMIC_H_
> +
> +/*------------------------- 64 bit atomic operations -------------------------*/
> +
> +/**
> + * An atomic compare and set function used by the mutex functions.
> + * (atomic) equivalent to:
> + *   if (*dst == exp)
> + *     *dst = src (all 64-bit words)
> + *
> + * @param dst
> + *   The destination into which the value will be written.
> + * @param exp
> + *   The expected value.
> + * @param src
> + *   The new value.
> + * @return
> + *   Non-zero on success; 0 on failure.
> + */
> +static inline int
> +rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
> +{
> +	unsigned int ret = 0;
> +
> +	asm volatile (
> +			"\tlwsync\n"
> +			"1: ldarx %[ret], 0, %[dst]\n"
> +			"cmpld %[exp], %[ret]\n"
> +			"bne 2f\n"
> +			"stdcx. %[src], 0, %[dst]\n"
> +			"bne- 1b\n"
> +			"li %[ret], 1\n"
> +			"b 3f\n"
> +			"2:\n"
> +			"stdcx. %[ret], 0, %[dst]\n"
> +			"li %[ret], 0\n"
> +			"3:\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret), "=m" (*dst)
> +			: [dst] "r" (dst), [exp] "r" (exp), [src] "r" (src), "m" (*dst)
> +			: "cc", "memory");
> +	return ret;
> +}
> +
> +/**
> + * The atomic counter structure.
> + */
> +typedef struct {
> +	volatile int64_t cnt;  /**< Internal counter value. */
> +} rte_atomic64_t;
> +
> +/**
> + * Static initializer for an atomic counter.
> + */
> +#define RTE_ATOMIC64_INIT(val) { (val) }
> +
> +/**
> + * Initialize the atomic counter.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_atomic64_init(rte_atomic64_t *v)
> +{
> +	v->cnt = 0;
> +}
> +
> +/**
> + * Atomically read a 64-bit counter.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   The value of the counter.
> + */
> +static inline int64_t
> +rte_atomic64_read(rte_atomic64_t *v)
> +{
> +	long ret;
> +
> +	asm volatile("ld%U1%X1 %[ret],%[cnt]" : [ret] "=r"(ret) : [cnt] "m"(v->cnt));
> +
> +	return ret;
> +}
> +
> +/**
> + * Atomically set a 64-bit counter.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @param new_value
> + *   The new value of the counter.
> + */
> +static inline void
> +rte_atomic64_set(rte_atomic64_t *v, int64_t new_value)
> +{
> +	asm volatile("std%U0%X0 %[new_value],%[cnt]" : [cnt] "=m"(v->cnt) : [new_value] "r"(new_value));
> +}
> +
> +/**
> + * Atomically add a 64-bit value to a counter.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @param inc
> + *   The value to be added to the counter.
> + */
> +static inline void
> +rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
> +{
> +	long t;
> +
> +	asm volatile(
> +			"1: ldarx %[t],0,%[cnt]\n"
> +			"add %[t],%[inc],%[t]\n"
> +			"stdcx. %[t],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			: [t] "=&r" (t), "=m" (v->cnt)
> +			: [cnt] "r" (&v->cnt), [inc] "r" (inc), "m" (v->cnt)
> +			: "cc", "memory");
> +}
> +
> +/**
> + * Atomically subtract a 64-bit value from a counter.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @param dec
> + *   The value to be subtracted from the counter.
> + */
> +static inline void
> +rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
> +{
> +	long t;
> +
> +	asm volatile(
> +			"1: ldarx %[t],0,%[cnt]\n"
> +			"subf %[t],%[dec],%[t]\n"
> +			"stdcx. %[t],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			: [t] "=&r" (t), "+m" (v->cnt)
> +			: [cnt] "r" (&v->cnt), [dec] "r" (dec), "m" (v->cnt)
> +			: "cc", "memory");
> +}
> +
> +/**
> + * Atomically increment a 64-bit counter by one.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_atomic64_inc(rte_atomic64_t *v)
> +{
> +	long t;
> +
> +	asm volatile(
> +			"1: ldarx %[t],0,%[cnt]\n"
> +			"addic %[t],%[t],1\n"
> +			"stdcx. %[t],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			: [t] "=&r" (t), "+m" (v->cnt)
> +			: [cnt] "r" (&v->cnt), "m" (v->cnt)
> +			: "cc", "xer", "memory");
> +}
> +
> +/**
> + * Atomically decrement a 64-bit counter by one.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_atomic64_dec(rte_atomic64_t *v)
> +{
> +	long t;
> +
> +	asm volatile(
> +			"1: ldarx %[t],0,%[cnt]\n"
> +			"addic %[t],%[t],-1\n"
> +			"stdcx. %[t],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			: [t] "=&r" (t), "+m" (v->cnt)
> +			: [cnt] "r" (&v->cnt), "m" (v->cnt)
> +			: "cc", "xer", "memory");
> +}
> +
> +/**
> + * Add a 64-bit value to an atomic counter and return the result.
> + *
> + * Atomically adds the 64-bit value (inc) to the atomic counter (v) and
> + * returns the value of v after the addition.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @param inc
> + *   The value to be added to the counter.
> + * @return
> + *   The value of v after the addition.
> + */
> +static inline int64_t
> +rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
> +{
> +	long ret;
> +
> +	asm volatile(
> +			"\n\tlwsync\n"
> +			"1: ldarx %[ret],0,%[cnt]\n"
> +			"add %[ret],%[inc],%[ret]\n"
> +			"stdcx. %[ret],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret)
> +			: [inc] "r" (inc), [cnt] "r" (&v->cnt)
> +			: "cc", "memory");
> +
> +	return ret;
> +}
> +
> +/**
> + * Subtract a 64-bit value from an atomic counter and return the result.
> + *
> + * Atomically subtracts the 64-bit value (dec) from the atomic counter (v)
> + * and returns the value of v after the subtraction.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @param dec
> + *   The value to be subtracted from the counter.
> + * @return
> + *   The value of v after the subtraction.
> + */
> +static inline int64_t
> +rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
> +{
> +	long ret;
> +
> +	asm volatile(
> +			"\n\tlwsync\n"
> +			"1: ldarx %[ret],0,%[cnt]\n"
> +			"subf %[ret],%[dec],%[ret]\n"
> +			"stdcx. %[ret],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret)
> +			: [dec] "r" (dec), [cnt] "r" (&v->cnt)
> +			: "cc", "memory");
> +
> +	return ret;
> +}
> +
> +static __inline__ long
> +rte_atomic64_inc_return(rte_atomic64_t *v)
> +{
> +	long ret;
> +
> +	asm volatile(
> +			"\n\tlwsync\n"
> +			"1: ldarx %[ret],0,%[cnt]\n"
> +			"addic %[ret],%[ret],1\n"
> +			"stdcx. %[ret],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret)
> +			: [cnt] "r" (&v->cnt)
> +			: "cc", "xer", "memory");
> +
> +	return ret;
> +}
> +
> +/**
> + * Atomically increment a 64-bit counter by one and test.
> + *
> + * Atomically increments the atomic counter (v) by one and returns
> + * true if the result is 0, or false in all other cases.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   True if the result after the addition is 0; false otherwise.
> + */
> +#define rte_atomic64_inc_and_test(v) (rte_atomic64_inc_return(v) == 0)
> +
> +static __inline__ long
> +rte_atomic64_dec_return(rte_atomic64_t *v)
> +{
> +	long ret;
> +
> +	asm volatile(
> +			"\n\tlwsync\n"
> +			"1: ldarx %[ret],0,%[cnt]\n"
> +			"addic %[ret],%[ret],-1\n"
> +			"stdcx. %[ret],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret)
> +			: [cnt] "r" (&v->cnt)
> +			: "cc", "xer", "memory");
> +
> +	return ret;
> +}
> +
> +/**
> + * Atomically decrement a 64-bit counter by one and test.
> + *
> + * Atomically decrements the atomic counter (v) by one and returns true if
> + * the result is 0, or false in all other cases.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   True if the result after subtraction is 0; false otherwise.
> + */
> +#define rte_atomic64_dec_and_test(v) (rte_atomic64_dec_return((v)) == 0)
> +
> +/**
> + * Atomically test and set a 64-bit atomic counter.
> + *
> + * If the counter value is already set, return 0 (failed). Otherwise, set
> + * the counter value to 1 and return 1 (success).
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   0 if failed; else 1, success.
> + */
> +static inline int
> +rte_atomic64_test_and_set(rte_atomic64_t *v)
> +{
> +	return rte_atomic64_cmpset((volatile uint64_t *)&v->cnt, 0, 1);
> +}
> +
> +/**
> + * Atomically set a 64-bit counter to 0.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_atomic64_clear(rte_atomic64_t *v)
> +{
> +	v->cnt = 0;
> +}
> +
> +#endif /* _RTE_POWERPC_64_ATOMIC_H_ */
> +
> diff --git a/lib/librte_eal/common/include/powerpc/arch/rte_atomic_arch.h
> b/lib/librte_eal/common/include/powerpc/arch/rte_atomic_arch.h
> new file mode 100644
> index 0000000..fe5666e
> --- /dev/null
> +++ b/lib/librte_eal/common/include/powerpc/arch/rte_atomic_arch.h
> @@ -0,0 +1,318 @@
> +/*
> + *   BSD LICENSE
> + *
> + *   Copyright (C) IBM Corporation 2014.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of IBM Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +*/
> +
> +/*
> + * Inspired from FreeBSD src/sys/powerpc/include/atomic.h
> + * Copyright (c) 2008 Marcel Moolenaar
> + * Copyright (c) 2001 Benno Rice
> + * Copyright (c) 2001 David E. O'Brien
> + * Copyright (c) 1998 Doug Rabson
> + * All rights reserved.
> + */
> +
> +#ifndef _RTE_ATOMIC_H_
> +#error "don't include this file directly, please include generic <rte_atomic.h>"
> +#endif
> +
> +#ifndef _RTE_ATOMIC_ARCH_H_
> +#define _RTE_ATOMIC_ARCH_H_
> +
> +#include
> +
> +/**
> + * General memory barrier.
> + *
> + * Guarantees that the LOAD and STORE operations generated before the
> + * barrier occur before the LOAD and STORE operations generated after.
> + */
> +#define rte_arch_mb() asm volatile("sync" : : : "memory")
> +
> +/**
> + * Write memory barrier.
> + *
> + * Guarantees that the STORE operations generated before the barrier
> + * occur before the STORE operations generated after.
> + */
> +#define rte_arch_wmb() asm volatile("sync" : : : "memory")
> +
> +/**
> + * Read memory barrier.
> + *
> + * Guarantees that the LOAD operations generated before the barrier
> + * occur before the LOAD operations generated after.
> + */
> +#define rte_arch_rmb() asm volatile("sync" : : : "memory")
> +
> +#define rte_arch_compiler_barrier() do {	\
> +	asm volatile ("" : : : "memory");	\
> +} while(0)
> +
> +/*------------------------- 16 bit atomic operations -------------------------*/
> +
> +/**
> + * The atomic counter structure.
> + */
> +typedef struct {
> +	volatile int16_t cnt;  /**< An internal counter value. */
> +} rte_atomic16_t;
> +
> +/**
> + * Atomic compare and set.
> + *
> + * (atomic) equivalent to:
> + *   if (*dst == exp)
> + *     *dst = src (all 16-bit words)
> + *
> + * @param dst
> + *   The destination location into which the value will be written.
> + * @param exp
> + *   The expected value.
> + * @param src
> + *   The new value.
> + * @return
> + *   Non-zero on success; 0 on failure.
> + */
> +static inline int
> +rte_arch_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
> +{
> +	return __atomic_compare_exchange(dst, &exp, &src, 0,
> +			__ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE) ? 1 : 0;
> +}
> +
> +/**
> + * Atomically increment a counter by one.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_arch_atomic16_inc(rte_atomic16_t *v)
> +{
> +	__atomic_add_fetch(&v->cnt, 1, __ATOMIC_ACQUIRE);
> +}
> +
> +/**
> + * Atomically decrement a counter by one.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_arch_atomic16_dec(rte_atomic16_t *v)
> +{
> +	__atomic_sub_fetch(&v->cnt, 1, __ATOMIC_ACQUIRE);
> +}
> +
> +/**
> + * Atomically increment a 16-bit counter by one and test.
> + *
> + * Atomically increments the atomic counter (v) by one and returns true if
> + * the result is 0, or false in all other cases.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   True if the result after the increment operation is 0; false otherwise.
> + */
> +static inline int
> +rte_arch_atomic16_inc_and_test(rte_atomic16_t *v)
> +{
> +	return (__atomic_add_fetch(&v->cnt, 1, __ATOMIC_ACQUIRE) == 0);
> +}
> +
> +/**
> + * Atomically decrement a 16-bit counter by one and test.
> + *
> + * Atomically decrements the atomic counter (v) by one and returns true if
> + * the result is 0, or false in all other cases.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   True if the result after the decrement operation is 0; false otherwise.
> + */
> +static inline int
> +rte_arch_atomic16_dec_and_test(rte_atomic16_t *v)
> +{
> +	return (__atomic_sub_fetch(&v->cnt, 1, __ATOMIC_ACQUIRE) == 0);
> +}
> +
> +/*------------------------- 32 bit atomic operations -------------------------*/
> +
> +/**
> + * The atomic counter structure.
> + */
> +typedef struct {
> +	volatile int32_t cnt;  /**< An internal counter value. */
> +} rte_atomic32_t;
> +
> +/**
> + * Atomic compare and set.
> + *
> + * (atomic) equivalent to:
> + *   if (*dst == exp)
> + *     *dst = src (all 32-bit words)
> + *
> + * @param dst
> + *   The destination location into which the value will be written.
> + * @param exp
> + *   The expected value.
> + * @param src
> + *   The new value.
> + * @return
> + *   Non-zero on success; 0 on failure.
> + */
> +static inline int
> +rte_arch_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
> +{
> +	unsigned int ret = 0;
> +
> +	asm volatile(
> +			"\tlwsync\n"
> +			"1:\tlwarx %[ret], 0, %[dst]\n"
> +			"cmplw %[exp], %[ret]\n"
> +			"bne 2f\n"
> +			"stwcx. %[src], 0, %[dst]\n"
> +			"bne- 1b\n"
> +			"li %[ret], 1\n"
> +			"b 3f\n"
> +			"2:\n"
> +			"stwcx. %[ret], 0, %[dst]\n"
> +			"li %[ret], 0\n"
> +			"3:\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret), "=m" (*dst)
> +			: [dst] "r" (dst), [exp] "r" (exp), [src] "r" (src), "m" (*dst)
> +			: "cc", "memory");
> +
> +	return ret;
> +}
> +
> +/**
> + * Atomically increment a counter by one.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_arch_atomic32_inc(rte_atomic32_t *v)
> +{
> +	int t;
> +
> +	asm volatile(
> +			"1: lwarx %[t],0,%[cnt]\n"
> +			"addic %[t],%[t],1\n"
> +			"stwcx. %[t],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			: [t] "=&r" (t), "=m" (v->cnt)
> +			: [cnt] "r" (&v->cnt), "m" (v->cnt)
> +			: "cc", "xer", "memory");
> +}
> +
> +/**
> + * Atomically decrement a counter by one.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + */
> +static inline void
> +rte_arch_atomic32_dec(rte_atomic32_t *v)
> +{
> +	int t;
> +
> +	asm volatile(
> +			"1: lwarx %[t],0,%[cnt]\n"
> +			"addic %[t],%[t],-1\n"
> +			"stwcx. %[t],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			: [t] "=&r" (t), "=m" (v->cnt)
> +			: [cnt] "r" (&v->cnt), "m" (v->cnt)
> +			: "cc", "xer", "memory");
> +}
> +
> +/**
> + * Atomically increment a 32-bit counter by one and test.
> + *
> + * Atomically increments the atomic counter (v) by one and returns true if
> + * the result is 0, or false in all other cases.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   True if the result after the increment operation is 0; false otherwise.
> + */
> +static inline int
> +rte_arch_atomic32_inc_and_test(rte_atomic32_t *v)
> +{
> +	int ret;
> +
> +	asm volatile(
> +			"\n\tlwsync\n"
> +			"1: lwarx %[ret],0,%[cnt]\n"
> +			"addic %[ret],%[ret],1\n"
> +			"stwcx. %[ret],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret)
> +			: [cnt] "r" (&v->cnt)
> +			: "cc", "xer", "memory");
> +
> +	return (ret == 0);
> +}
> +
> +/**
> + * Atomically decrement a 32-bit counter by one and test.
> + *
> + * Atomically decrements the atomic counter (v) by one and returns true if
> + * the result is 0, or false in all other cases.
> + *
> + * @param v
> + *   A pointer to the atomic counter.
> + * @return
> + *   True if the result after the decrement operation is 0; false otherwise.
> + */
> +static inline int
> +rte_arch_atomic32_dec_and_test(rte_atomic32_t *v)
> +{
> +	int ret;
> +
> +	asm volatile(
> +			"\n\tlwsync\n"
> +			"1: lwarx %[ret],0,%[cnt]\n"
> +			"addic %[ret],%[ret],-1\n"
> +			"stwcx. %[ret],0,%[cnt]\n"
> +			"bne- 1b\n"
> +			"isync\n"
> +			: [ret] "=&r" (ret)
> +			: [cnt] "r" (&v->cnt)
> +			: "cc", "xer", "memory");
> +
> +	return (ret == 0);
> +}
> +
> +#endif /* _RTE_ATOMIC_ARCH_H_ */
> +
> --
> 1.7.1
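[Editor's illustration] When reviewing the `ldarx`/`stdcx.` loop in `rte_atomic64_cmpset` above on a non-Power machine, its contract can be mimicked with the portable GCC/Clang `__atomic` builtins. This is a behavioral sketch with a hypothetical function name, not the patch's implementation; the acquire/release ordering only roughly approximates the `lwsync`/`isync` fencing in the assembly.

```c
#include <stdint.h>

/* Behavioral stand-in for rte_atomic64_cmpset(): atomically performs
 *   if (*dst == exp) { *dst = src; return 1; } else return 0;
 * using the portable strong compare-exchange builtin. */
static inline int
atomic64_cmpset_sketch(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
	/* 0 = strong CAS (no spurious failure), acq_rel on success,
	 * acquire on failure */
	return __atomic_compare_exchange_n(dst, &exp, src, 0,
			__ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
}
```

Such a stand-in lets the higher-level DPDK code paths that call `rte_atomic64_cmpset` be exercised on x86 while the Power assembly is reviewed separately.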