From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Date: Thu, 28 Oct 2021 13:21:05 +0530
To: Feifei Wang
Cc: Ruifeng Wang, dpdk-dev, nd, "Ananyev, Konstantin",
 Stephen Hemminger, David Marchand, thomas@monjalon.net,
 Mattias Rönnblom
Subject: Re: [dpdk-dev] [PATCH v7 1/5] eal: add new definitions for wait scheme
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

On Thu, Oct 28, 2021 at 1:11 PM Feifei Wang wrote:
>
> > -----Original Message-----
> > From: Jerin Jacob
> > Sent: Thursday, October 28, 2021 3:16 PM
> > To: Feifei Wang
> > Cc: Ruifeng Wang; dpdk-dev; nd; Ananyev, Konstantin;
> > Stephen Hemminger; David Marchand; thomas@monjalon.net;
> > Mattias Rönnblom
> > Subject: Re: [PATCH v7 1/5] eal: add new definitions for wait scheme
> >
> > On Thu, Oct 28, 2021 at 12:26 PM Feifei Wang wrote:
> > >
> > > Introduce macros as a generic interface for address monitoring.
> > > For the different sizes, encapsulate '__LOAD_EXC_16', '__LOAD_EXC_32'
> > > and '__LOAD_EXC_64' into a new macro '__LOAD_EXC'.
> > > Furthermore, to prevent a compilation warning on Arm:
> > > ----------------------------------------------
> > > 'warning: implicit declaration of function ...'
> > > ----------------------------------------------
> > > delete the 'undef' constructions for '__LOAD_EXC_xx', '__SEVL' and '__WFE',
> > > and add '__RTE_ARM' to these macros to fix the namespace.
> > >
> > > This is because the original macros are undefined at the end of the file.
> > > If the new macro 'rte_wait_event' calls them from other files, they will
> > > be seen as 'not defined'.
> > >
> > > Signed-off-by: Feifei Wang
> > > Reviewed-by: Ruifeng Wang
> > > ---
> >
> > > +static __rte_always_inline void
> > > +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> > > +		int memorder)
> > > +{
> > > +	uint16_t value;
> > > +
> > > +	assert(memorder == __ATOMIC_ACQUIRE || memorder ==
> > > +		__ATOMIC_RELAXED);
> >
> > assert() is not good in a library; why not RTE_BUILD_BUG_ON here?
> [Feifei] This line is original code which has nothing to do with this patch;
> I can change it in the next version.
> >
> > > +
> > > +	__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
> > >  	if (value != expected) {
> > > -		__SEVL()
> > > +		__RTE_ARM_SEVL()
> > >  		do {
> > > -			__WFE()
> > > -			__LOAD_EXC_16(addr, value, memorder)
> > > +			__RTE_ARM_WFE()
> > > +			__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
> > >  		} while (value != expected);
> > >  	}
> > > -#undef __LOAD_EXC_16
> > >  }
> > >
> > >  static __rte_always_inline void
> > > @@ -77,34 +124,14 @@ rte_wait_until_equal_32(volatile uint32_t *addr,
> > > uint32_t expected,
> > >
> > >  	assert(memorder == __ATOMIC_ACQUIRE || memorder ==
> > > 		__ATOMIC_RELAXED);
> > >
> > > -	/*
> > > -	 * Atomic exclusive load from addr, it returns the 32-bit content of
> > > -	 * *addr while making it 'monitored'; when it is written by someone
> > > -	 * else, the 'monitored' state is cleared and an event is generated
> > > -	 * implicitly to exit WFE.
> > > -	 */
> > > -#define __LOAD_EXC_32(src, dst, memorder) {              \
> > > -	if (memorder == __ATOMIC_RELAXED) {              \
> > > -		asm volatile("ldxr %w[tmp], [%x[addr]]"  \
> > > -			: [tmp] "=&r" (dst)              \
> > > -			: [addr] "r"(src)                \
> > > -			: "memory");                     \
> > > -	} else {                                         \
> > > -		asm volatile("ldaxr %w[tmp], [%x[addr]]" \
> > > -			: [tmp] "=&r" (dst)              \
> > > -			: [addr] "r"(src)                \
> > > -			: "memory");                     \
> > > -	} }
> > > -
> > > -	__LOAD_EXC_32(addr, value, memorder)
> > > +	__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
> > >  	if (value != expected) {
> > > -		__SEVL()
> > > +		__RTE_ARM_SEVL()
> > >  		do {
> > > -			__WFE()
> > > -			__LOAD_EXC_32(addr, value, memorder)
> > > +			__RTE_ARM_WFE()
> > > +			__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
> > >  		} while (value != expected);
> > >  	}
> > > -#undef __LOAD_EXC_32
> > >  }
> > >
> > >  static __rte_always_inline void
> > > @@ -115,38 +142,33 @@ rte_wait_until_equal_64(volatile uint64_t *addr,
> > > uint64_t expected,
> > >
> > >  	assert(memorder == __ATOMIC_ACQUIRE || memorder ==
> > > 		__ATOMIC_RELAXED);
> >
> > Remove the assert and change to
> > RTE_BUILD_BUG_ON.
> [Feifei] OK
> >
> > > -	/*
> > > -	 * Atomic exclusive load from addr, it returns the 64-bit content of
> > > -	 * *addr while making it 'monitored'; when it is written by someone
> > > -	 * else, the 'monitored' state is cleared and an event is generated
> > > -	 * implicitly to exit WFE.
> > > -	 */
> > > -#define __LOAD_EXC_64(src, dst, memorder) {              \
> > > -	if (memorder == __ATOMIC_RELAXED) {              \
> > > -		asm volatile("ldxr %x[tmp], [%x[addr]]"  \
> > > -			: [tmp] "=&r" (dst)              \
> > > -			: [addr] "r"(src)                \
> > > -			: "memory");                     \
> > > -	} else {                                         \
> > > -		asm volatile("ldaxr %x[tmp], [%x[addr]]" \
> > > -			: [tmp] "=&r" (dst)              \
> > > -			: [addr] "r"(src)                \
> > > -			: "memory");                     \
> > > -	} }
> > > -
> > > -	__LOAD_EXC_64(addr, value, memorder)
> > > +	__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
> > >  	if (value != expected) {
> > > -		__SEVL()
> > > +		__RTE_ARM_SEVL()
> > >  		do {
> > > -			__WFE()
> > > -			__LOAD_EXC_64(addr, value, memorder)
> > > +			__RTE_ARM_WFE()
> > > +			__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
> > >  		} while (value != expected);
> > >  	}
> > >  }
> > > -#undef __LOAD_EXC_64
> > >
> > > -#undef __SEVL
> > > -#undef __WFE
> > > +#define rte_wait_event(addr, mask, cond, expected, memorder)     \
> > > +do {                                                             \
> > > +	RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder));       \
> > > +	RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&         \
> > > +		memorder != __ATOMIC_RELAXED);                   \
> > > +	uint32_t size = sizeof(*(addr)) << 3;
> >
> > Add const.
> [Feifei] OK.
> > > +	typeof(*(addr)) expected_value = (expected);             \
> > > +	typeof(*(addr)) value = 0;
> >
> > Why the zero assignment?
> I will delete this initialization.
> > > +	__RTE_ARM_LOAD_EXC((addr), value, memorder, size)        \
> >
> > assert() is not good in a library; why not RTE_BUILD_BUG_ON here?
> [Feifei] For __RTE_ARM_LOAD_EXC, 'size' is not known until the code is
> running, so it cannot be checked at compile time and BUILD_BUG_ON doesn't
> work here.
With uint32_t size = sizeof(*(addr)) << 3, the value is known at compile
time, since sizeof is evaluated by the compiler. So I think
RTE_BUILD_BUG_ON is fine.

> >
> > > +	if ((value & (mask)) cond expected_value) {              \
> > > +		__RTE_ARM_SEVL()                                 \
> > > +		do {                                             \
> > > +			__RTE_ARM_WFE()                          \
> > > +			__RTE_ARM_LOAD_EXC((addr), value,        \
> > > +				memorder, size)                  \
> >
> > If the address is of type __int128_t, this logic will fail. Could you add
> > 128-bit support too and remove the assert from __RTE_ARM_LOAD_EXC?
> [Feifei] There is no 128-bit case in the library. If a 128-bit case appears,
> we can add a 128-bit path here. For now there is an assert check in
> __RTE_ARM_LOAD_EXC to verify the size is 16/32/64.

The API takes only "addr" without any type, so the application can use
128-bit too. Worst case, for now we can fall back to __atomic_load_n()
for size 128; we don't want to break applications using this API. Or
add support for 128 in the code.

> >
> > > +		} while ((value & (mask)) cond expected_value);  \
> > > +	}                                                        \
> > > +} while (0)
> > >
> > >  #endif
> > >
> > > diff --git a/lib/eal/include/generic/rte_pause.h
> > > b/lib/eal/include/generic/rte_pause.h
> > > index 668ee4a184..d0c5b5a415 100644
> > > --- a/lib/eal/include/generic/rte_pause.h
> > > +++ b/lib/eal/include/generic/rte_pause.h
> > > @@ -111,6 +111,34 @@ rte_wait_until_equal_64(volatile uint64_t *addr,
> > > uint64_t expected,
> > >  	while (__atomic_load_n(addr, memorder) != expected)
> > >  		rte_pause();
> > >  }
> > > +
> > > +/*
> > > + * Wait until *addr breaks the condition, with a relaxed memory
> > > + * ordering model meaning the loads around this API can be reordered.
> > > + *
> > > + * @param addr
> > > + *  A pointer to the memory location.
> > > + * @param mask
> > > + *  A mask of the value bits of interest.
> > > + * @param cond
> > > + *  A symbol representing the condition.
> > > + * @param expected
> > > + *  An expected value to be in the memory location.
> > > + * @param memorder
> > > + *  Two different memory orders that can be specified:
> > > + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> > > + *  C++11 memory orders with the same names; see the C++11 standard
> > > + *  or the GCC wiki on atomic synchronization for detailed
> > > + *  definitions.
> > > + */
> > > +#define rte_wait_event(addr, mask, cond, expected, memorder)       \
> > > +do {                                                               \
> > > +	RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder));          \
> > > +	RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&            \
> > > +		memorder != __ATOMIC_RELAXED);                      \
> > > +	typeof(*(addr)) expected_value = (expected);                \
> > > +	while ((__atomic_load_n((addr), (memorder)) & (mask)) cond  \
> > > +			expected_value)                             \
> > > +		rte_pause();                                        \
> > > +} while (0)
> > >  #endif
> > >
> > >  #endif /* _RTE_PAUSE_H_ */
> > > --
> > > 2.25.1
> > >