DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Bruce Richardson" <bruce.richardson@intel.com>
Cc: "Olivier Matz" <olivier.matz@6wind.com>,
	"Matan Azrad" <matan@nvidia.com>,
	"Shahaf Shuler" <shahafs@nvidia.com>,
	"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>, <dev@dpdk.org>
Subject: Re: [dpdk-dev] mbuf next field belongs in the first cacheline
Date: Tue, 15 Jun 2021 15:40:50 +0200
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35C6185E@smartserver.smartshare.dk>
In-Reply-To: <YMilmf0XvQbLmS7d@bricha3-MOBL.ger.corp.intel.com>

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Tuesday, 15 June 2021 15.05
> 
> On Tue, Jun 15, 2021 at 02:16:27PM +0200, Morten Brørup wrote:
> > MBUF and MLX5 maintainers,
> >
> > I'm picking up an old discussion, which you might consider pursuing.
> > Feel free to ignore, if you consider this discussion irrelevant or
> > already closed and done with.
> >
> > The Techboard has previously discussed the organization of the mbuf
> > fields. Ref:
> > http://mails.dpdk.org/archives/dev/2020-November/191859.html
> >
> > It was concluded that there was no measured performance difference
> > whether the "pool" or the "next" field was in the first cacheline,
> > so it was decided to put the "pool" field in the first cacheline.
> > Further optimization of the mbuf field organization could be
> > reconsidered later.
> >
> > I have been looking at it. In theory, it should not be required to
> > touch the "pool" field at RX. But the "next" field must be written
> > for segmented packets.
> >
> Question: are there cases where segmented packets are used, but they
> aren't big packets, and so need a high packets-per-second value? The
> thinking when designing the mbuf was that any application which could
> handle high packets per second for medium/small packets would be fine
> with a few extra cycles penalty for big ones, since the overall PPS
> for the driver would be much lower.

A reality check is always good! :-)

I recall a proposal from NVIDIA that introduced a feature to split RX packets into multiple small segments from a list of mbuf pools; basically a variant of "header split". Here it is: https://patchwork.dpdk.org/project/dpdk/list/?series=13070&state=%2A&archive=both

I don't know whether swapping the "next" and "pool" fields would make a performance difference when the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload is in use.
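
For reference, a minimal sketch of how an application might request such a split via the buffer-split API (untested; the pool names, the 64-byte split point, and the descriptor count are illustrative, and which capability checks are required depends on the PMD):

  #include <string.h>
  #include <rte_ethdev.h>

  /* Configure one RX queue to split each packet into a small header
   * segment and a payload segment, drawn from two different pools. */
  static int
  setup_split_rxq(uint16_t port_id, uint16_t queue_id,
                  struct rte_mempool *hdr_pool,  /* small mbufs */
                  struct rte_mempool *pay_pool)  /* large mbufs */
  {
      struct rte_eth_dev_info dev_info;
      union rte_eth_rxseg rx_seg[2];
      struct rte_eth_rxconf rxconf;
      int ret;

      ret = rte_eth_dev_info_get(port_id, &dev_info);
      if (ret != 0)
          return ret;

      memset(rx_seg, 0, sizeof(rx_seg));
      rx_seg[0].split.mp = hdr_pool;
      rx_seg[0].split.length = 64;  /* first 64 bytes -> header pool */
      rx_seg[1].split.mp = pay_pool;
      rx_seg[1].split.length = 0;   /* 0 = remainder of the packet */

      rxconf = dev_info.default_rxconf;
      rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
      rxconf.rx_seg = rx_seg;
      rxconf.rx_nseg = 2;

      /* With rx_seg/rx_nseg set, the mempool argument must be NULL. */
      return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                    rte_eth_dev_socket_id(port_id),
                                    &rxconf, NULL);
  }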

Otherwise, you are correct: The performance gain is mostly theoretical.

So in reality, it would be a very big change for an insignificant improvement.

It's mainly the principle that annoys me: The DPDK documentation states that the mbuf structure is designed so that the second cache line is not touched by RX. If that is no longer the guiding principle, the documentation needs to be updated.
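
To make the point concrete, the split under discussion looks roughly like this (a heavily abridged sketch, not the real definition; rte_mbuf_core.h in the DPDK tree is authoritative):

  /* Illustrative sketch only -- most fields omitted. */
  struct rte_mempool;  /* opaque here */

  struct mbuf_sketch {
      /* First cacheline: everything the RX path needs to touch. */
      void *buf_addr;              /* segment buffer address */
      /* ... rearm data: data_off, refcnt, nb_segs, port ... */
      /* ... RX descriptor fields: ol_flags, packet_type,
       *     pkt_len, data_len, RSS hash ... */
      struct rte_mempool *pool;    /* in cacheline0 per the decision above */

      /* Second cacheline: TX, segmentation, and the free path. */
      struct mbuf_sketch *next;    /* next segment; written at RX
                                    * only for segmented packets */
      /* ... TX offload fields, external buffer info ... */
  } __attribute__((aligned(64)));  /* cacheline-aligned */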



Thread overview: 3+ messages
2021-06-15 12:16 Morten Brørup
2021-06-15 13:05 ` Bruce Richardson
2021-06-15 13:40   ` Morten Brørup [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the thread as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=98CBD80474FA8B44BF855DF32C47DC35C6185E@smartserver.smartshare.dk \
    --to=mb@smartsharesystems.com \
    --cc=bruce.richardson@intel.com \
    --cc=dev@dpdk.org \
    --cc=matan@nvidia.com \
    --cc=olivier.matz@6wind.com \
    --cc=shahafs@nvidia.com \
    --cc=viacheslavo@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line before the message body.