From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 5 Apr 2023 00:49:21 +0100
Subject: Re: [PATCH 3/9] eal: use barrier intrinsics when compiling with msvc
To: Tyler Retzlaff, Konstantin Ananyev
Cc: "dev@dpdk.org", "bruce.richardson@intel.com", "david.marchand@redhat.com",
 "thomas@monjalon.net", "mb@smartsharesystems.com"
References: <1680558751-17931-1-git-send-email-roretzla@linux.microsoft.com>
 <1680558751-17931-4-git-send-email-roretzla@linux.microsoft.com>
 <754a877d020a4a199a5310b469e876a7@huawei.com>
 <20230404154906.GC23247@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
From: Konstantin Ananyev
In-Reply-To: <20230404154906.GC23247@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>

04/04/2023 16:49, Tyler Retzlaff wrote:
> On Tue, Apr 04, 2023 at 12:11:07PM +0000, Konstantin Ananyev wrote:
>>
>>
>>> Inline assembly is not supported for msvc x64 instead use
>>> _{Read,Write,ReadWrite}Barrier() intrinsics.
>>>
>>> Signed-off-by: Tyler Retzlaff
>>> ---
>>>  lib/eal/include/generic/rte_atomic.h |  4 ++++
>>>  lib/eal/x86/include/rte_atomic.h     | 10 +++++++++-
>>>  2 files changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
>>> index 234b268..e973184 100644
>>> --- a/lib/eal/include/generic/rte_atomic.h
>>> +++ b/lib/eal/include/generic/rte_atomic.h
>>> @@ -116,9 +116,13 @@
>>>   * Guarantees that operation reordering does not occur at compile time
>>>   * for operations directly before and after the barrier.
>>>   */
>>> +#ifndef RTE_TOOLCHAIN_MSVC
>>>  #define rte_compiler_barrier() do { \
>>>  	asm volatile ("" : : : "memory"); \
>>>  } while(0)
>>> +#else
>>> +#define rte_compiler_barrier() _ReadWriteBarrier()
>>> +#endif
>>>
>>>  /**
>>>   * Synchronization fence between threads based on the specified memory order.
>>> diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
>>> index f2ee1a9..5cce9ba 100644
>>> --- a/lib/eal/x86/include/rte_atomic.h
>>> +++ b/lib/eal/x86/include/rte_atomic.h
>>> @@ -27,9 +27,13 @@
>>>
>>>  #define rte_rmb() _mm_lfence()
>>>
>>> +#ifndef RTE_TOOLCHAIN_MSVC
>>>  #define rte_smp_wmb() rte_compiler_barrier()
>>> -
>>>  #define rte_smp_rmb() rte_compiler_barrier()
>>> +#else
>>> +#define rte_smp_wmb() _WriteBarrier()
>>> +#define rte_smp_rmb() _ReadBarrier()
>>> +#endif
>>>
>>>  /*
>>>   * From Intel Software Development Manual; Vol 3;
>>> @@ -66,11 +70,15 @@
>>>  static __rte_always_inline void
>>>  rte_smp_mb(void)
>>>  {
>>> +#ifndef RTE_TOOLCHAIN_MSVC
>>>  #ifdef RTE_ARCH_I686
>>>  	asm volatile("lock addl $0, -128(%%esp); " ::: "memory");
>>>  #else
>>>  	asm volatile("lock addl $0, -128(%%rsp); " ::: "memory");
>>>  #endif
>>> +#else
>>> +	rte_compiler_barrier();
>>> +#endif
>>
>> It doesn't look right to me: compiler_barrier is not identical to a LOCK-ed operation,
>> and is not enough to serve as a proper memory barrier for SMP.
>
> i think i'm confused by the macro naming here. i'll take another look
> thank you for raising it.
>
>>
>> Another more generic comment - do we really need to pollute all that code with RTE_TOOLCHAIN_MSVC ifdefs?
>> Right now we have the ability to have a subdir per arch (x86/arm/etc.).
>> Can we treat x86+windows+msvc as a special arch?
>
> i asked this question previously and confirmed in the technical board
> meeting. the answer i received was that the community did not want new
> directory/headers introduced for compiler support matrix and i should
> use #ifdef in the existing headers.

Ok, can I then ask to at least keep the number of ifdefs to an absolute minimum?
It is really hard to read and follow code that is heavily ifdef'ed.
Let's say, above we probably don't need to re-define rte_smp_rmb/rte_smp_wmb,
as both boil down to compiler_barrier(), which is already redefined.
Another question - could we take the vice-versa approach: can we replace some of the inline assembly with common intrinsics wherever possible?