From: Bruce Richardson <bruce.richardson@intel.com>
To: "Morten Brørup" <mb@smartsharesystems.com>
Cc: Jan Viktorin <viktorin@rehivetech.com>,
Ruifeng Wang <ruifeng.wang@arm.com>,
David Christensen <drc@linux.vnet.ibm.com>,
Konstantin Ananyev <konstantin.ananyev@intel.com>,
dev@dpdk.org
Subject: Re: rte_memcpy alignment
Date: Fri, 14 Jan 2022 09:11:07 +0000 [thread overview]
Message-ID: <YeE+K08sU6wnkEgx@bricha3-MOBL.ger.corp.intel.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D86E00@smartserver.smartshare.dk>
On Fri, Jan 14, 2022 at 09:56:50AM +0100, Morten Brørup wrote:
> Dear ARM/POWER/x86 maintainers,
>
> The architecture specific rte_memcpy() provides optimized variants to copy aligned data. However, the alignment requirements depend on the hardware architecture, and there is no common definition for the alignment.
>
> DPDK provides __rte_cache_aligned for cache optimization purposes, with architecture specific values. Would you consider providing an __rte_memcpy_aligned for rte_memcpy() optimization purposes?
>
> Or should I just use __rte_cache_aligned, although it is overkill?
>
>
> Specifically, I am working on a mempool optimization where the objs field in the rte_mempool_cache structure may benefit by being aligned for optimized rte_memcpy().
>
For me the difficulty with such a memcpy proposal - apart from probably
adding to the amount of memcpy code we have to maintain - is the specific
meaning of "aligned" in the memcpy case. Unlike for a struct definition,
the possible meanings of aligned in memcpy could be:
* the source address is aligned
* the destination address is aligned
* both source and destination are aligned
* both source and destination are aligned and the copy length is a multiple
of the alignment length
* the data is aligned to a cacheline boundary
* the data is aligned to the largest load-store size for the system
* the data is aligned to the boundary suitable for the copy size, e.g.
memcpy of 8 bytes is 8-byte aligned etc.
Can you clarify a bit more on your own thinking here? Personally, I am a
little dubious of the benefit of general memcpy optimization, but I do
believe that for specific usecases there is value in having their own copy
operations which include constraints for that specific usecase. For
example, in the AVX-512 ice/i40e PMD code, we fold the memcpy from the
mempool cache into the descriptor rearm function because we know we can
always do 64-byte loads and stores, and also because we know that for each
load in the copy, we can reuse the data just after storing it (giving good
perf boost). Perhaps something similar could work for you in your mempool
optimization.
/Bruce
Thread overview: 8+ messages
2022-01-14 8:56 Morten Brørup
2022-01-14 9:11 ` Bruce Richardson [this message]
2022-01-14 9:53 ` Morten Brørup
2022-01-14 10:22 ` Bruce Richardson
2022-01-14 10:54 ` Ananyev, Konstantin
2022-01-14 11:05 ` Morten Brørup
2022-01-14 11:51 ` Ananyev, Konstantin
2022-01-17 12:03 ` Morten Brørup