Subject: Re: [PATCH v2 5/5] eal: extend bitops to handle volatile pointers
Date: Mon, 12 Aug 2024 14:28:28 +0200
To: Jack Bond-Preston, Mattias Rönnblom, dev@dpdk.org
Cc: Heng Wang, Stephen Hemminger, Joyce Kong, Tyler Retzlaff, Morten Brørup
References: <20240809090439.589295-2-mattias.ronnblom@ericsson.com> <20240809095829.589396-1-mattias.ronnblom@ericsson.com> <20240809095829.589396-6-mattias.ronnblom@ericsson.com> <0c46f8fd-c63b-4736-839f-ab787076109a@foss.arm.com>
From: Mattias Rönnblom
In-Reply-To: <0c46f8fd-c63b-4736-839f-ab787076109a@foss.arm.com>

On 2024-08-12 13:22, Jack Bond-Preston wrote:
> On 09/08/2024 10:58, Mattias Rönnblom wrote:
>>
>> +#define __RTE_GEN_BIT_ATOMIC_TEST(v, qualifier, size)            \
>>       __rte_experimental                        \
>>       static inline bool                        \
>> -    __rte_bit_atomic_test ## size(const uint ## size ## _t *addr,    \
>> -                      unsigned int nr, int memory_order) \
>> +    __rte_bit_atomic_ ## v ## test ## size(const qualifier uint ## size ## _t *addr, \
>> +                           unsigned int nr, int memory_order) \
>>       {                                \
>>           RTE_ASSERT(nr < size);                    \
>>                                       \
>> -        const RTE_ATOMIC(uint ## size ## _t) *a_addr =        \
>> -            (const RTE_ATOMIC(uint ## size ## _t) *)addr;    \
>> +        const qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr = \
>> +            (const qualifier RTE_ATOMIC(uint ## size ## _t) *)addr;    \
>>           uint ## size ## _t mask = (uint ## size ## _t)1 << nr;    \
>>           return rte_atomic_load_explicit(a_addr, memory_order) & mask; \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_SET(size)                    \
>> +#define __RTE_GEN_BIT_ATOMIC_SET(v, qualifier, size)            \
>>       __rte_experimental                        \
>>       static inline void                        \
>> -    __rte_bit_atomic_set ## size(uint ## size ## _t *addr,        \
>> -                     unsigned int nr, int memory_order)    \
>> +    __rte_bit_atomic_ ## v ## set ## size(qualifier uint ## size ## _t *addr, \
>> +                          unsigned int nr, int memory_order) \
>>       {                                \
>>           RTE_ASSERT(nr < size);                    \
>>                                       \
>> -        RTE_ATOMIC(uint ## size ## _t) *a_addr =        \
>> -            (RTE_ATOMIC(uint ## size ## _t) *)addr;        \
>> +        qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =    \
>> +            (qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
>>           uint ## size ## _t mask = (uint ## size ## _t)1 << nr;    \
>>           rte_atomic_fetch_or_explicit(a_addr, mask, memory_order); \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_CLEAR(size)                \
>> +#define __RTE_GEN_BIT_ATOMIC_CLEAR(v, qualifier, size)            \
>>       __rte_experimental                        \
>>       static inline void                        \
>> -    __rte_bit_atomic_clear ## size(uint ## size ## _t *addr,    \
>> -                       unsigned int nr, int memory_order) \
>> +    __rte_bit_atomic_ ## v ## clear ## size(qualifier uint ## size ## _t *addr,    \
>> +                        unsigned int nr, int memory_order) \
>>       {                                \
>>           RTE_ASSERT(nr < size);                    \
>>                                       \
>> -        RTE_ATOMIC(uint ## size ## _t) *a_addr =        \
>> -            (RTE_ATOMIC(uint ## size ## _t) *)addr;        \
>> +        qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =    \
>> +            (qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
>>           uint ## size ## _t mask = (uint ## size ## _t)1 << nr;    \
>>           rte_atomic_fetch_and_explicit(a_addr, ~mask, memory_order); \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_FLIP(size)                    \
>> +#define __RTE_GEN_BIT_ATOMIC_FLIP(v, qualifier, size)            \
>>       __rte_experimental                        \
>>       static inline void                        \
>> -    __rte_bit_atomic_flip ## size(uint ## size ## _t *addr,        \
>> -                       unsigned int nr, int memory_order) \
>> +    __rte_bit_atomic_ ## v ## flip ## size(qualifier uint ## size ## _t *addr, \
>> +                           unsigned int nr, int memory_order) \
>>       {                                \
>>           RTE_ASSERT(nr < size);                    \
>>                                       \
>> -        RTE_ATOMIC(uint ## size ## _t) *a_addr =        \
>> -            (RTE_ATOMIC(uint ## size ## _t) *)addr;        \
>> +        qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =    \
>> +            (qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
>>           uint ## size ## _t mask = (uint ## size ## _t)1 << nr;    \
>>           rte_atomic_fetch_xor_explicit(a_addr, mask, memory_order); \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_ASSIGN(size)                \
>> +#define __RTE_GEN_BIT_ATOMIC_ASSIGN(v, qualifier, size)            \
>>       __rte_experimental                        \
>>       static inline void                        \
>> -    __rte_bit_atomic_assign ## size(uint ## size ## _t *addr,    \
>> -                    unsigned int nr, bool value,    \
>> -                    int memory_order)        \
>> +    __rte_bit_atomic_## v ## assign ## size(qualifier uint ## size ## _t *addr, \
>> +                        unsigned int nr, bool value, \
>> +                        int memory_order)    \
>>       {                                \
>>           if (value)                        \
>> -            __rte_bit_atomic_set ## size(addr, nr, memory_order); \
>> +            __rte_bit_atomic_ ## v ## set ## size(addr, nr, memory_order); \
>>           else                            \
>> -            __rte_bit_atomic_clear ## size(addr, nr,    \
>> -                               memory_order);    \
>> +            __rte_bit_atomic_ ## v ## clear ## size(addr, nr, \
>> +                                     memory_order); \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(size)                \
>> +#define __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(v, qualifier, size)        \
>>       __rte_experimental                        \
>>       static inline bool                        \
>> -    __rte_bit_atomic_test_and_set ## size(uint ## size ## _t *addr,    \
>> -                          unsigned int nr,        \
>> -                          int memory_order)        \
>> +    __rte_bit_atomic_ ## v ## test_and_set ## size(qualifier uint ## size ## _t *addr, \
>> +                               unsigned int nr,    \
>> +                               int memory_order) \
>>       {                                \
>>           RTE_ASSERT(nr < size);                    \
>>                                       \
>> -        RTE_ATOMIC(uint ## size ## _t) *a_addr =        \
>> -            (RTE_ATOMIC(uint ## size ## _t) *)addr;        \
>> +        qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =    \
>> +            (qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
>>           uint ## size ## _t mask = (uint ## size ## _t)1 << nr;    \
>>           uint ## size ## _t prev;                \
>>                                       \
>> @@ -587,17 +632,17 @@ __RTE_GEN_BIT_FLIP(, flip,, 64)
>>           return prev & mask;                    \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(size)            \
>> +#define __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(v, qualifier, size)        \
>>       __rte_experimental                        \
>>       static inline bool                        \
>> -    __rte_bit_atomic_test_and_clear ## size(uint ## size ## _t *addr, \
>> -                        unsigned int nr,    \
>> -                        int memory_order)    \
>> +    __rte_bit_atomic_ ## v ## test_and_clear ## size(qualifier uint ## size ## _t *addr, \
>> +                             unsigned int nr, \
>> +                             int memory_order) \
>>       {                                \
>>           RTE_ASSERT(nr < size);                    \
>>                                       \
>> -        RTE_ATOMIC(uint ## size ## _t) *a_addr =        \
>> -            (RTE_ATOMIC(uint ## size ## _t) *)addr;        \
>> +        qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =    \
>> +            (qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
>>           uint ## size ## _t mask = (uint ## size ## _t)1 << nr;    \
>>           uint ## size ## _t prev;                \
>>                                       \
>> @@ -607,34 +652,36 @@ __RTE_GEN_BIT_FLIP(, flip,, 64)
>>           return prev & mask;                    \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(size)            \
>> +#define __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(v, qualifier, size)    \
>>       __rte_experimental                        \
>>       static inline bool                        \
>> -    __rte_bit_atomic_test_and_assign ## size(uint ## size ## _t *addr, \
>> -                         unsigned int nr,    \
>> -                         bool value,        \
>> -                         int memory_order)    \
>> +    __rte_bit_atomic_ ## v ## test_and_assign ## size(qualifier uint ## size ## _t *addr, \
>> +                              unsigned int nr, \
>> +                              bool value,    \
>> +                              int memory_order) \
>>       {                                \
>>           if (value)                        \
>> -            return __rte_bit_atomic_test_and_set ## size(addr, nr, \
>> -                                     memory_order); \
>> +            return __rte_bit_atomic_ ## v ## test_and_set ## size(addr, nr, memory_order); \
>>           else                            \
>> -            return __rte_bit_atomic_test_and_clear ## size(addr, nr, \
>> -                                       memory_order); \
>> +            return __rte_bit_atomic_ ## v ## test_and_clear ## size(addr, nr, memory_order); \
>>       }
>> -#define __RTE_GEN_BIT_ATOMIC_OPS(size)            \
>> -    __RTE_GEN_BIT_ATOMIC_TEST(size)            \
>> -    __RTE_GEN_BIT_ATOMIC_SET(size)            \
>> -    __RTE_GEN_BIT_ATOMIC_CLEAR(size)        \
>> -    __RTE_GEN_BIT_ATOMIC_ASSIGN(size)        \
>> -    __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(size)        \
>> -    __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(size)    \
>> -    __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(size)    \
>> -    __RTE_GEN_BIT_ATOMIC_FLIP(size)
>> +#define __RTE_GEN_BIT_ATOMIC_OPS(v, qualifier, size)    \
>> +    __RTE_GEN_BIT_ATOMIC_TEST(v, qualifier, size)    \
>> +    __RTE_GEN_BIT_ATOMIC_SET(v, qualifier, size)    \
>> +    __RTE_GEN_BIT_ATOMIC_CLEAR(v, qualifier, size)    \
>> +    __RTE_GEN_BIT_ATOMIC_ASSIGN(v, qualifier, size)    \
>> +    __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(v, qualifier, size) \
>> +    __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(v, qualifier, size) \
>> +    __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(v, qualifier, size) \
>> +    __RTE_GEN_BIT_ATOMIC_FLIP(v, qualifier, size)
>> -__RTE_GEN_BIT_ATOMIC_OPS(32)
>> -__RTE_GEN_BIT_ATOMIC_OPS(64)
>> +#define __RTE_GEN_BIT_ATOMIC_OPS_SIZE(size) \
>> +    __RTE_GEN_BIT_ATOMIC_OPS(,, size) \
>> +    __RTE_GEN_BIT_ATOMIC_OPS(v_, volatile, size)
>> +
>> +__RTE_GEN_BIT_ATOMIC_OPS_SIZE(32)
>> +__RTE_GEN_BIT_ATOMIC_OPS_SIZE(64)
>
> The first argument for these should probably be called "family", for
> consistency with the non-atomic ops.
>

The family is "atomic" or "" (for the non-atomic version), so it's not a
good name. I'll rename the macro parameters in __RTE_GEN_BIT_TEST()
instead: 'qualifier' should be 'c', or maybe 'const_qualifier' or
'const_qual', to be more descriptive. The names should be consistent
with the overload macros.

>>   /*------------------------ 32-bit relaxed operations
>> ------------------------*/
>>