Subject: RE: rte_memcpy alignment
Date: Mon, 17 Jan 2022 13:03:36 +0100
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D86E0A@smartserver.smartshare.dk>
From: Morten Brørup <mb@smartsharesystems.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, "Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "Jan Viktorin", "Ruifeng Wang", "David Christensen", dev@dpdk.org
List-Id: DPDK patches and discussions

> From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> Sent: Friday, 14 January 2022 12.52
>
> > > From: Ananyev, Konstantin [mailto:konstantin.ananyev@intel.com]
> > > Sent: Friday, 14 January 2022 11.54
> > >
> > > > From: Morten Brørup
> > > > Sent: Friday, January 14, 2022 9:54 AM
> > > >
> > > > > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > > > > Sent: Friday, 14 January 2022 10.11
> > > > >
> > > > > On Fri, Jan 14, 2022 at 09:56:50AM +0100, Morten Brørup wrote:
> > > > > > Dear ARM/POWER/x86 maintainers,
> > > > > >
> > > > > > The architecture specific rte_memcpy() provides optimized variants
> > > > > > to copy aligned data. However, the alignment requirements depend
> > > > > > on the hardware architecture, and there is no common definition
> > > > > > for the alignment.
> > > > > >
> > > > > > DPDK provides __rte_cache_aligned for cache optimization purposes,
> > > > > > with architecture specific values. Would you consider providing an
> > > > > > __rte_memcpy_aligned for rte_memcpy() optimization purposes?
> > > > > >
> > > > > > Or should I just use __rte_cache_aligned, although it is overkill?
> > > > > >
> > > > > > Specifically, I am working on a mempool optimization where the
> > > > > > objs field in the rte_mempool_cache structure may benefit from
> > > > > > being aligned for optimized rte_memcpy().
> > > > >
> > > > > For me the difficulty with such a memcpy proposal - apart from
> > > > > probably adding to the amount of memcpy code we have to maintain -
> > > > > is the specific meaning of "aligned" in the memcpy case. Unlike for
> > > > > a struct definition, the possible meanings of aligned in memcpy
> > > > > could be:
> > > > > * the source address is aligned
> > > > > * the destination address is aligned
> > > > > * both source and destination are aligned
> > > > > * both source and destination are aligned and the copy length is a
> > > > >   multiple of the alignment length
> > > > > * the data is aligned to a cacheline boundary
> > > > > * the data is aligned to the largest load-store size for the system
> > > > > * the data is aligned to the boundary suitable for the copy size,
> > > > >   e.g. memcpy of 8 bytes is 8-byte aligned etc.
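To make the distinction concrete, the strictest of the variants above (source, destination, and length all aligned) can be sketched as a single predicate. The mask value (15, matching an SSE build) and the helper name are illustrative assumptions, not DPDK's actual implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative mask: 15 corresponds to an SSE build; AVX2 and AVX-512
 * builds would use 31 and 63 respectively. */
#define ALIGNMENT_MASK 15

/* Strictest variant from the list above: take the aligned fast path
 * only when source, destination AND length all satisfy the alignment. */
static int
copy_is_fully_aligned(const void *dst, const void *src, size_t n)
{
	return (((uintptr_t)dst | (uintptr_t)src | (uintptr_t)n)
		& ALIGNMENT_MASK) == 0;
}
```

OR-ing the two addresses and the length before masking checks all three conditions in one branch, which is how such dispatch tests are typically kept cheap.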
> > > > >
> > > > > Can you clarify a bit more on your own thinking here? Personally, I
> > > > > am a little dubious of the benefit of general memcpy optimization,
> > > > > but I do believe that for specific usecases there is value in having
> > > > > their own copy operations which include constraints for that
> > > > > specific usecase. For example, in the AVX-512 ice/i40e PMD code, we
> > > > > fold the memcpy from the mempool cache into the descriptor rearm
> > > > > function because we know we can always do 64-byte loads and stores,
> > > > > and also because we know that for each load in the copy, we can
> > > > > reuse the data just after storing it (giving a good perf boost).
> > > > > Perhaps something similar could work for you in your mempool
> > > > > optimization.
> > > > >
> > > > > /Bruce
> > > >
> > > > I'm going to copy an array of pointers, specifically the 'objs' array
> > > > in the rte_mempool_cache structure.
> > > >
> > > > The 'objs' array starts at byte 24, which is only 8 byte aligned. So
> > > > it always fails the ALIGNMENT_MASK test in the x86 specific
> > > > rte_memcpy(), and thus can never use the optimized
> > > > rte_memcpy_aligned() function to copy the array, but will use the
> > > > rte_memcpy_generic() function instead.
> > > >
> > > > If the 'objs' array was optimally aligned, and the other array that
> > > > is being copied to/from is also optimally aligned, rte_memcpy() would
> > > > use the optimized rte_memcpy_aligned() function.
> > > >
> > > > Please also note that the value of ALIGNMENT_MASK depends on which
> > > > vector instruction set DPDK is being compiled with.
> > > >
> > > > The other CPU architectures have similar code in their rte_memcpy()
> > > > implementations, and their alignment requirements are also different.
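The offset arithmetic can be sketched as follows. The field names below are hypothetical filler chosen only to reproduce a member at byte 24; this is not the real rte_mempool_cache definition. The point is that 24 is a multiple of 8 but not of 16, 32, or 64, so such a member fails every possible x86 ALIGNMENT_MASK (15/31/63) test:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout reproducing an 'objs' member at byte 24; the
 * field names are illustrative, not the real rte_mempool_cache. */
struct cache_sketch {
	uint32_t size;        /* offset 0  */
	uint32_t flushthresh; /* offset 4  */
	uint32_t len;         /* offset 8  */
	uint32_t pad;         /* offset 12 */
	uint64_t filler;      /* offset 16 */
	void *objs[4];        /* offset 24 on LP64 targets */
};
```

With offsetof(struct cache_sketch, objs) == 24, masking with 15, 31, or 63 always leaves a nonzero remainder, which is exactly why the aligned fast path is never taken.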
> > > >
> > > > Please also note that rte_memcpy() becomes even more optimized when
> > > > the size of the memcpy() operation is known at compile time.
> > >
> > > If the size is known at compile time, rte_memcpy() is probably overkill
> > > - modern compilers usually generate fast enough code for such cases.
> > >
> > > > So I am asking for a public #define __rte_memcpy_aligned I can use to
> > > > meet the alignment requirements for optimal rte_memcpy().
> > >
> > > Even on x86, ALIGNMENT_MASK could have different values (15/31/63)
> > > depending on the ISA. So probably 64 as the 'generic' one is the safest
> > > bet.
> >
> > I will use cache line alignment for now.

Dear ARM/POWER/x86 maintainers,

Please forget my request. I am quite confident that __rte_cache_aligned
suffices for rte_memcpy() purposes too, so there is no need to introduce
one more definition.

> >
> > > Though I wonder, do we really need such micro-optimizations here?
> >
> > I'm not sure, but since it's available, I will use it. :-)
> >
> > And the mempool get/put functions are very frequently used, so I think
> > we should squeeze out every bit of performance we can.
>
> Well, it wouldn't come for free, right?
> You would probably need to do some extra checking and add handling for
> non-aligned cases.
> Anyway, I will probably just wait for the patch before going into further
> discussions :)

Konstantin was right! Mempool_perf_autotest revealed that rte_memcpy() was
inefficient, so I used a different method in the patch:
http://inbox.dpdk.org/dev/20220117115231.8060-1-mb@smartsharesystems.com/T/#u
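For completeness, a minimal, self-contained sketch of the conclusion reached above: cache-line alignment already satisfies every per-ISA mask, so no new __rte_memcpy_aligned macro is needed. C11 alignas stands in for DPDK's __rte_cache_aligned macro here, and the 64-byte line size is an assumption matching common x86/ARM server parts:

```c
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed cache line size; DPDK's RTE_CACHE_LINE_SIZE plays this role. */
#define CACHE_LINE_SIZE 64

/* Aligning the pointer array to a cache line makes its address pass any
 * ALIGNMENT_MASK test (15, 31 or 63), so no new macro is needed. */
struct aligned_cache_sketch {
	uint32_t len;
	alignas(CACHE_LINE_SIZE) void *objs[4]; /* starts on a cache line */
};
```

With this layout, objs lands at offset 64, which is divisible by 16, 32, and 64 alike, at the cost of up to a cache line of padding per structure.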