From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Marchand
Date: Sun, 27 Oct 2019 21:49:39 +0100
To: Gavin Hu
Cc: dev, nd, "Ananyev, Konstantin", Thomas Monjalon, Stephen Hemminger,
 Hemant Agrawal, Jerin Jacob Kollanukkaran, Pavan Nikhilesh,
 Honnappa Nagarahalli, "Ruifeng Wang (Arm Technology China)", Phil Yang,
 Steve Capper
Subject: Re: [dpdk-dev] [PATCH v11 2/5] eal: add the APIs to wait until equal
In-Reply-To: <1572180765-49767-3-git-send-email-gavin.hu@arm.com>
References: <1561911676-37718-1-git-send-email-gavin.hu@arm.com>
 <1572180765-49767-1-git-send-email-gavin.hu@arm.com>
 <1572180765-49767-3-git-send-email-gavin.hu@arm.com>
List-Id: DPDK patches and discussions

On Sun, Oct 27, 2019 at 1:53 PM Gavin Hu wrote:

[snip]

> diff --git a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> index 93895d3..1680d7a 100644
> --- a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> +++ b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h

[snip]

> @@ -17,6 +23,188 @@ static inline void rte_pause(void)
>  	asm volatile("yield" ::: "memory");
>  }
>
> +/**
> + * Send an
> + * event to quit WFE.
> + */
> +static inline void rte_sevl(void);
> +
> +/**
> + * Put processor into low power WFE (Wait For Event) state
> + */
> +static inline void rte_wfe(void);
> +
> +#ifdef RTE_ARM_USE_WFE
> +static inline void rte_sevl(void)
> +{
> +	asm volatile("sevl" : : : "memory");
> +}
> +
> +static inline void rte_wfe(void)
> +{
> +	asm volatile("wfe" : : : "memory");
> +}
> +#else
> +static inline void rte_sevl(void)
> +{
> +}
> +
> +static inline void rte_wfe(void)
> +{
> +	rte_pause();
> +}
> +#endif
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice

experimental?
Just complaining on the principle, you missed the __rte_experimental in
such a case.
But this API is a no go for me, see below.

> + *
> + * Atomic exclusive load from addr, it returns the 16-bit content of *addr
> + * while making it 'monitored'; when it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  and the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint16_t
> +rte_atomic_load_ex_16(volatile uint16_t *addr, int memorder);

This API does not make sense for anything but arm, so this prefix is not good.
On arm, when RTE_ARM_USE_WFE is undefined, why would you need it?
A non exclusive load is enough since you don't want to use wfe.
[snip]

> +
> +static __rte_always_inline uint16_t
> +rte_atomic_load_ex_16(volatile uint16_t *addr, int memorder)
> +{
> +	uint16_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +		|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxrh %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxrh %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline uint32_t
> +rte_atomic_load_ex_32(volatile uint32_t *addr, int memorder)
> +{
> +	uint32_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +		|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxr %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxr %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline uint64_t
> +rte_atomic_load_ex_64(volatile uint64_t *addr, int memorder)
> +{
> +	uint64_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +		|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxr %x[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxr %x[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();

We are in the RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED case.
rte_wfe() is always asm volatile("wfe" : : : "memory");

> +		} while (rte_atomic_load_ex_16(addr, memorder) != expected);
> +	}
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (__atomic_load_n(addr, memorder) != expected);
> +	}
> +}

The while() should be with an exclusive load.

I will submit a v12 with those comments addressed so that we move forward
for rc2.
But it won't make it in rc1, sorry.

-- 
David Marchand