DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: "Morten Brørup" <mb@smartsharesystems.com>
Cc: Ferruh Yigit <ferruh.yigit@amd.com>,
	Vipin Varghese <vipin.varghese@amd.com>, <dev@dpdk.org>,
	<stable@dpdk.org>, <honest.jiang@foxmail.com>,
	Thiyagrajan P <thiyagarajan.p@amd.com>
Subject: Re: [PATCH] app/dma-perf: replace pktmbuf with mempool objects
Date: Tue, 12 Dec 2023 15:37:29 +0000
Message-ID: <ZXh-ObnTwmu69WaF@bricha3-MOBL.ger.corp.intel.com>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35E9F0C0@smartserver.smartshare.dk>

On Tue, Dec 12, 2023 at 04:16:20PM +0100, Morten Brørup wrote:
> +TO: Bruce, please stop me if I'm completely off track here.
> 
> > From: Ferruh Yigit [mailto:ferruh.yigit@amd.com]
> > Sent: Tuesday, 12 December 2023 15.38
> > 
> > On 12/12/2023 11:40 AM, Morten Brørup wrote:
> > >> From: Vipin Varghese [mailto:vipin.varghese@amd.com]
> > >> Sent: Tuesday, 12 December 2023 11.38
> > >>
> > >> Replace the pktmbuf pool with a plain mempool; this increases MOPS,
> > >> especially at smaller buffer sizes. Using a raw mempool avoids the
> > >> extra CPU cycles.
> > >
> > > I get the point of this change: it tests the performance of copying
> > > raw memory objects with rte_memcpy and with DMA respectively, without
> > > the mbuf indirection overhead.
> > >
> > > However, I still consider the existing test relevant: the performance
> > > of copying packets with rte_memcpy and with DMA respectively.
> > >
> > 
> > This is a DMA performance test application and packets are not used;
> > using pktmbuf just introduces overhead unrelated to the main focus of
> > the application.
> > 
> > I am not sure if pktmbuf was selected intentionally for this test
> > application, but I assume it is there for historical reasons.
> 
> I think pktmbuf was selected intentionally, to provide more accurate
> results for application developers trying to determine when to use
> rte_memcpy and when to use DMA. Much like the "copy breakpoint" in Linux
> Ethernet drivers is used to determine which code path to take for each
> received packet.
> 
> Most applications will be working with pktmbufs, so these applications
> will also experience the pktmbuf overhead. Performance testing with the
> same overhead as the application is better at helping the application
> developer determine when to use rte_memcpy and when to use DMA when
> working with pktmbufs.
> 
> (Furthermore, for the pktmbuf tests, I wonder if copying performance
> could also depend on IOVA mode and RTE_IOVA_IN_MBUF.)
> 
> Nonetheless, there may also be use cases where raw mempool objects are
> being copied by rte_memcpy or DMA, so adding tests for these use cases
> is useful.
> 
> 
> @Bruce, you were also deeply involved in the DMA library, and probably
> have more up-to-date practical experience with it. Am I right that
> pktmbuf overhead in these tests provides more "real life use"-like
> results? Or am I completely off track with my thinking here, i.e. the
> pktmbuf overhead is only noise?
> 
I'm actually not that familiar with the dma-test application, so can't
comment on the specific overhead involved here. In the general case, if we
are just talking about the overhead of dereferencing the mbufs then I would
expect the overhead to be negligible. However, if we are looking to include
the cost of allocation and freeing of buffers, I'd try to avoid that as it
is a cost that would have to be paid for both SW copies and HW copies, so
should not count when calculating offload cost.

/Bruce
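
For context on the change under discussion: the quoted patch description
swaps a pktmbuf pool for a raw mempool as the source/destination buffers.
Below is a minimal sketch of the two setups, using placeholder names and
sizes rather than the actual dma-perf code:

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_OBJS  4096   /* placeholder element count */
    #define BUF_SIZE 1024   /* placeholder copy size under test */

    static int
    create_pools(struct rte_mempool **mbuf_pool, struct rte_mempool **raw_pool)
    {
        /* pktmbuf pool: each element carries an rte_mbuf header and
         * headroom in addition to the data room used for the copy. */
        *mbuf_pool = rte_pktmbuf_pool_create("dma_perf_mbuf", NB_OBJS,
                0 /* cache */, 0 /* priv */,
                BUF_SIZE + RTE_PKTMBUF_HEADROOM, rte_socket_id());

        /* raw mempool: each element is just BUF_SIZE bytes of payload,
         * so no mbuf header is initialised or dereferenced. */
        *raw_pool = rte_mempool_create("dma_perf_raw", NB_OBJS, BUF_SIZE,
                0 /* cache */, 0, NULL, NULL, NULL, NULL,
                rte_socket_id(), 0);

        return (*mbuf_pool != NULL && *raw_pool != NULL) ? 0 : -1;
    }

With the raw pool the benchmark copies between bare objects; with the
pktmbuf pool it additionally pays for mbuf setup and the
rte_pktmbuf_mtod() indirection.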
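Morten's "copy breakpoint" analogy above boils down to a per-packet
length check. A rough sketch, assuming a hypothetical threshold value
and that a later rte_dma_submit()/rte_dma_completed() pass handles the
enqueued copies:

    #include <rte_dmadev.h>
    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    #define COPY_BREAKPOINT 512   /* bytes; hypothetical, tune per platform */

    static inline int
    copy_packet(int16_t dma_dev_id, uint16_t vchan,
                struct rte_mbuf *src, struct rte_mbuf *dst)
    {
        uint32_t len = rte_pktmbuf_data_len(src);

        if (len < COPY_BREAKPOINT) {
            /* Short copy: cheaper to do on the CPU right away. */
            rte_memcpy(rte_pktmbuf_mtod(dst, void *),
                       rte_pktmbuf_mtod(src, const void *), len);
            return 0;
        }

        /* Longer copy: hand it to the DMA engine; the descriptor is
         * submitted and completions are harvested elsewhere. */
        return rte_dma_copy(dma_dev_id, vchan,
                            rte_mbuf_data_iova(src),
                            rte_mbuf_data_iova(dst),
                            len, 0);
    }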
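Bruce's point about not counting allocation and freeing can be made
concrete: keep the buffer get/put outside the timed region, so the cycle
count covers only the copy (or the enqueue/completion for the DMA path).
A minimal sketch of the SW-copy side, with placeholder parameters:

    #include <rte_cycles.h>
    #include <rte_memcpy.h>

    /* src[]/dst[] are assumed to have been taken from the pool
     * beforehand (e.g. with rte_mempool_get_bulk()) and are returned
     * after the measurement, so neither cost lands in the timed loop. */
    static uint64_t
    time_sw_copies(void **src, void **dst, unsigned int nb_bufs, size_t len)
    {
        uint64_t start, end;
        unsigned int i;

        start = rte_rdtsc_precise();
        for (i = 0; i < nb_bufs; i++)
            rte_memcpy(dst[i], src[i], len);
        end = rte_rdtsc_precise();

        return end - start;   /* cycles spent copying only */
    }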


Thread overview: 16+ messages
2023-12-12 10:37 Vipin Varghese
2023-12-12 11:40 ` Morten Brørup
2023-12-12 14:38   ` Ferruh Yigit
2023-12-12 15:16     ` Morten Brørup
2023-12-12 15:37       ` Bruce Richardson [this message]
2023-12-12 17:13         ` Varghese, Vipin
2023-12-12 18:09           ` Morten Brørup
2023-12-12 18:13             ` Varghese, Vipin
2023-12-20  9:17               ` Varghese, Vipin
2023-12-20  9:21                 ` David Marchand
2023-12-20 11:03 ` [PATCH v2] " Vipin Varghese
2024-02-26  2:05   ` fengchengwen
2024-02-27  9:57     ` Varghese, Vipin
2024-02-27 12:27       ` fengchengwen
2024-02-28  3:08         ` Varghese, Vipin
2023-12-19 16:35 [PATCH] " Vipin Varghese
