Subject: Re: [RFC v2] eal: provide option to use compiler memcpy instead of RTE
Date: Tue, 28 May 2024 18:17:51 +0200
From: Mattias Rönnblom
To: Morten Brørup, Bruce Richardson
Cc: Mattias Rönnblom, dev@dpdk.org, Stephen Hemminger
Message-ID: <184120df-26c4-446c-826e-19a8a3b1aa49@lysator.liu.se>
List-Id: DPDK patches and discussions
References: <20240527111151.188607-1-mattias.ronnblom@ericsson.com>
 <20240528074354.190779-1-mattias.ronnblom@ericsson.com>
 <738e376c-c5b6-44dc-ad51-00f40d2ea6b5@lysator.liu.se>
 <6c21fd93-a875-4fde-ae02-0a146f0efdb4@lysator.liu.se>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35E9F4B7@smartserver.smartshare.dk>

On 2024-05-28 11:07, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Tuesday, 28 May 2024 11.00
>>
>> On 2024-05-28 10:27, Bruce Richardson wrote:
>>> On Tue, May 28, 2024 at 10:19:15AM +0200, Mattias Rönnblom wrote:
>>>> On 2024-05-28 09:43, Mattias Rönnblom wrote:
>>>>> Provide a build option to have the functions in <rte_memcpy.h>
>>>>> delegate to the standard compiler/libc memcpy(), instead of using
>>>>> the various traditional, handcrafted, per-architecture
>>>>> rte_memcpy() implementations.
>>>>>
>>>>> A new meson build option 'use_cc_memcpy' is added. The default is
>>>>> true. It's not obvious what the default should be, but compiler
>>>>> memcpy() is enabled by default in this RFC so that any tests run
>>>>> with this patch use the new approach.
>>>>>
>>>>> One purpose of this RFC is to make it easy to evaluate the costs
>>>>> and benefits of a switch.
>>>>>
>>>>
>>>> I've tested this patch some with DSW micro benchmarks, and the
>>>> result is a 2.5% reduction of the DSW+testapp overhead with cc/libc
>>>> memcpy. GCC 11.4.
>>>>
>>>> We've also run the characteristic test suite of a large, real-world
>>>> app. Here, we saw no effect. GCC 10.5.
>>>>
>>>> x86_64 in both cases (Skylake and Raptor Lake).
>>>>
>>>> Last time we did the same, there was a noticeable performance
>>>> degradation in both of the above cases.
>
> Mattias, which compiler was that?
>

GCC 9, I think.
Not only the compiler changed between those two test runs.

It would be interesting to see some ARM data points as well.

> As previously mentioned in another thread, I'm worried about memcpy
> performance with older compilers.
> DPDK officially supports GCC 4.9 and clang 3.4 [1].
> I don't think degrading performance when using supported compilers is
> considered acceptable.
>
> Alternatively, we could change the DPDK compiler policy from
> "supported" to "works with (but might not perform optimally)".
>

GCC 4.9 is ten years old. If you are using an old compiler, odds are
you don't really care too much about squeezing out max performance,
considering how much better code generation is in newer compilers.

That said, we obviously don't want to cause large performance
regressions for no good reason, even for old compilers.

> [1]: https://doc.dpdk.org/guides-21.11/linux_gsg/sys_reqs.html#compilation-of-the-dpdk
>
>>>>
>>>> This is not a lot of data points, but I think we should consider
>>>> making the custom RTE memcpy() implementations optional in the next
>>>> release, and if no-one complains, remove the implementations in the
>>>> release after that.
>>>>
>>>> (Whether or not [or how long] to keep the wrapper API is another
>>>> question.)
>>>>
>>>
>>> The other instance I've heard mention of in the past is virtio/vhost,
>>> which used to have a speedup from the custom memcpy.
>>>
>>> My own thinking on these cases is that for targeted settings like
>>> these, we should look to have local memcpy functions written, taking
>>> account of the specifics of each use case. For virtio/vhost, for
>>> example, we can have assumptions around host buffer alignment, and
>>> we can also be pretty confident we are copying to another CPU. For
>>> DSW, or other eventdev cases, we would only be looking at copies of
>>> multiples of 16, with guaranteed 8-byte alignment on both source and
>>> destination.
>>> Writing efficient copy fns
>>
>> In such cases, you should first try to tell the compiler that it's
>> safe to assume that the pointers have a certain alignment.
>>
>> void copy256(void *dst, const void *src)
>> {
>>         memcpy(dst, src, 256);
>> }
>>
>> void copy256_a(void *dst, const void *src)
>> {
>>         void *dst_a = __builtin_assume_aligned(dst, 32);
>>         const void *src_a = __builtin_assume_aligned(src, 32);
>>
>>         memcpy(dst_a, src_a, 256);
>> }
>>
>> The first will generate loads/stores without alignment restrictions,
>> while the latter will use things like vmovdqa or vmovaps.
>>
>> (I doubt there's much of a performance difference though, if any at
>> all.)
>
> Interesting.
>
>>
>>> for specific scenarios can be faster and more effective than trying
>>> to write a general, optimized-in-all-cases memcpy. It also
>>> discourages the use of non-libc memcpy except where really
>>> necessary.
>
> Good idea, Bruce.
> I have previously worked on an optimized memcpy, where information
> about alignment, multiples, non-temporal source/destination, etc. is
> passed as flags to the function [2]. But it turned into too much work,
> so I never finished it.
>
> If we start with local memcpy functions optimized for each specific
> use case, we still have the option of consolidating them into a common
> rte_memcpy function later. It will also reveal which flags/features
> such a common function needs to support.
>
> [2]: https://inbox.dpdk.org/dev/20221010064600.16495-1-mb@smartsharesystems.com/
>
>>>
>>> Naturally, if we find there are a lot of cases where use of libc
>>> memcpy slows us down, we will want to keep a general rte_memcpy.
>>> However, I'd hope the slowdown cases are very few.
>>>
>>> /Bruce