DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Bruce Richardson" <bruce.richardson@intel.com>,
	"Manish Sharma" <manish.sharmajee75@gmail.com>
Cc: <dev@dpdk.org>
Subject: Re: [dpdk-dev] rte_memcpy - fence and stream
Date: Thu, 27 May 2021 20:15:19 +0200	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35C617E4@smartserver.smartshare.dk>
In-Reply-To: <YK/VVIlCwQ8cHwB3@bricha3-MOBL.ger.corp.intel.com>

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Thursday, 27 May 2021 19.22
> 
> On Thu, May 27, 2021 at 10:39:59PM +0530, Manish Sharma wrote:
> >    For the case I have, hardly 2% of the data buffers which are being
> >    copied get looked at - mostly it's for DMA. Having a version of
> >    DPDK memcpy that does non-temporal copies would definitely be
> >    good. If, in my case, I have a lot of CPUs doing the copy in
> >    parallel, would the I/OAT copy accelerator driver still help?
> >
> It will depend upon the size of the copies being done. For bigger
> packets the accelerator can help free up CPU cycles for other things.
> 
> However, if only 2% of the data which is being copied gets looked at,
> why does it need to be copied? Can the original buffers not be used in
> that case?
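
[As an aside for the archive: at the time, the I/OAT copy engine was driven through the ioat rawdev API. Below is a minimal enqueue sketch, not a definitive implementation: dev_id, dst_iova, offload_copy and the surrounding setup and completion handling are assumed, and the signatures are those of rte_ioat_rawdev.h circa DPDK 21.05.]

#include <rte_ioat_rawdev.h>
#include <rte_mbuf.h>

/* Sketch: hand one packet copy to an IOAT rawdev instead of doing it
 * on the CPU. Assumes dev_id is a configured ioat rawdev and dst_iova
 * is the IOVA of a DMA-able destination buffer; completions would be
 * drained elsewhere with rte_ioat_completed_ops(). */
static int
offload_copy(int dev_id, const struct rte_mbuf *m,
	     rte_iova_t dst_iova, unsigned int len)
{
	/* The handles are opaque values returned on completion; passing
	 * the mbuf pointer lets the completion path free the mbuf. */
	if (rte_ioat_enqueue_copy(dev_id, rte_mbuf_data_iova(m), dst_iova,
			len, (uintptr_t)m, 0) != 1)
		return -1;	/* ring full: fall back to a CPU copy */

	rte_ioat_perform_ops(dev_id);	/* ring the doorbell */
	return 0;
}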

I can only speak for myself here...

Our firmware has a packet capture feature with a filter.

If a packet matches the capture filter, a metadata header and the relevant part of the packet contents ("snap length" in tcpdump terminology) are appended to a large memory area (the "capture buffer") using rte_pktmbuf_read/rte_memcpy. This capture buffer is only read through the GUI or management API by the network administrator, i.e. it will only be read minutes or hours later, so there is no need to put any of it in any CPU cache.
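
In rough pseudo-C, that append path could look like the sketch below; capture_append, cap_buf and cap_off are hypothetical names (not our actual code), and writing the metadata header is omitted:

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Sketch: append the first snap_len bytes of a matched packet to the
 * capture buffer at the current write offset. */
static void
capture_append(uint8_t *cap_buf, size_t *cap_off,
	       const struct rte_mbuf *m, uint32_t snap_len)
{
	uint32_t len = RTE_MIN(snap_len, rte_pktmbuf_pkt_len(m));
	uint8_t *dst = cap_buf + *cap_off;
	const void *p;

	/* rte_pktmbuf_read() returns a pointer into the mbuf when the
	 * requested range is contiguous in the first segment; otherwise
	 * it gathers the segments into the buffer we supply (dst). */
	p = rte_pktmbuf_read(m, 0, len, dst);
	if (p != dst)
		rte_memcpy(dst, p, len);	/* contiguous case: one copy */
	*cap_off += len;
}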

It does not make sense to clone and hold on to many thousands of mbufs when we only need some of their contents. So we copy the contents instead of increasing the mbuf refcount.

We currently only use our packet capture feature for R&D purposes, so we have not optimized it yet. However, we will need to optimize it for production use at some point, so I find this discussion initiated by Manish very interesting.
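
For the archive, here is roughly what such a cache-bypassing copy could look like on x86. It is only a sketch: it assumes a 16-byte aligned destination and a length that is a multiple of 16, and a production rte_memcpy variant would also have to handle the unaligned head and tail:

#include <emmintrin.h>	/* SSE2: _mm_loadu_si128, _mm_stream_si128 */
#include <stddef.h>

/* Sketch of a non-temporal copy: streaming stores write around the
 * cache, so filling the capture buffer does not evict hot data. */
static void
memcpy_nt(void *dst, const void *src, size_t len)
{
	const __m128i *s = (const __m128i *)src;
	__m128i *d = (__m128i *)dst;
	size_t i;

	for (i = 0; i < len / 16; i++)
		_mm_stream_si128(&d[i], _mm_loadu_si128(&s[i]));

	/* Streaming stores are weakly ordered; fence before anyone else
	 * (another core, a DMA engine) is allowed to read the buffer. */
	_mm_sfence();
}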

-Morten


Thread overview: 9+ messages
2021-05-24  8:49 [dpdk-dev] rte_memcpy Manish Sharma
2021-05-24 18:13 ` [dpdk-dev] rte_memcpy - fence and stream Manish Sharma
2021-05-25  9:20   ` Bruce Richardson
2021-05-27 15:49     ` Morten Brørup
2021-05-27 16:25       ` Bruce Richardson
2021-05-27 17:09         ` Manish Sharma
2021-05-27 17:22           ` Bruce Richardson
2021-05-27 18:15             ` Morten Brørup [this message]
2021-06-22 21:55               ` Morten Brørup
