From: Olivier MATZ <olivier.matz@6wind.com>
To: Stephen Hemminger <stephen@networkplumber.org>,
Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org, Stephen Hemminger <shemming@brocade.com>
Subject: Re: [dpdk-dev] [PATCH 2/2] mbuf: make sure userdata is initialized
Date: Wed, 03 Feb 2016 18:23:37 +0100 [thread overview]
Message-ID: <56B23799.4020600@6wind.com> (raw)
In-Reply-To: <20150710084356.37d22b65@urahara>
Hi,
On 07/10/2015 05:43 PM, Stephen Hemminger wrote:
> On Fri, 10 Jul 2015 10:32:17 +0100
> Bruce Richardson <bruce.richardson@intel.com> wrote:
>
>> On Thu, Jul 09, 2015 at 04:37:48PM -0700, Stephen Hemminger wrote:
>>> From: Stephen Hemminger <shemming@brocade.com>
>>>
>>> For applications that use m->userdata, the initialization can
>>> be a significant (10%) performance penalty.
>>>
>>> Rather than taking the cache penalty of initializing userdata
>>> in the receive handling, do it in the place where the mbuf is
>>> already cache hot and being set up.
>>
>> Should the management of the userdata field not be the responsibility of the
>> app itself, rather than having the PMD manage it? If the PMD does manage the
>> userdata field, I would suggest taking the approach of having the field cleared
>> by the mbuf library on free, rather than on RX.
>
> The problem with that is that when the application gets the mbuf,
> touching the m->userdata field causes false cache sharing and we
> see a significant performance drop. Internally the userdata field
> is only used for a few special cases, and 0/NULL is used to
> indicate that no metadata is present.
Agree with Bruce. The management of this field is the responsibility
of the application.
Also, this field is located in the second cache line, and we should
avoid touching it in the RX functions. We had the same discussion for
the "next" field in this thread:
http://dpdk.org/ml/archives/dev/2015-June/019182.html
A pure-app solution would be to set it to NULL (a rough sketch
follows this list):
- in your mempool object init function
- in your mbuf_free() wrapper, before calling rte_pktmbuf_free()
- before passing the mbuf to the tx driver
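For instance, something like this on the application side (just a
rough sketch against the current mbuf layout, where userdata is a
plain pointer; app_pktmbuf_init() and app_mbuf_free() are made-up
names):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* mempool object init: run the standard mbuf init, then clear the
 * application metadata once, while the mbuf is being written anyway */
static void
app_pktmbuf_init(struct rte_mempool *mp, void *opaque, void *obj,
		 unsigned idx)
{
	struct rte_mbuf *m = obj;

	rte_pktmbuf_init(mp, opaque, obj, idx);
	m->userdata = NULL;
}

/* app-level free wrapper: reset userdata before the mbuf goes back
 * to the pool, so it is already NULL on the next allocation
 * (head mbuf only; chained segments would need the same treatment) */
static inline void
app_mbuf_free(struct rte_mbuf *m)
{
	m->userdata = NULL;
	rte_pktmbuf_free(m);
}

app_pktmbuf_init() would then be passed to rte_mempool_create() as
the obj_init callback in place of rte_pktmbuf_init().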
But a better solution in dpdk is probably what Bruce suggests.
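What Bruce suggests would instead put the clear in the mbuf library's
free path, roughly like this (only an illustration of the idea, not a
proposed patch; it assumes the current __rte_pktmbuf_prefree_seg() /
__rte_mbuf_raw_free() helpers):

static inline void
rte_pktmbuf_free_seg(struct rte_mbuf *m)
{
	m = __rte_pktmbuf_prefree_seg(m);
	if (likely(m != NULL)) {
		m->next = NULL;
		m->userdata = NULL; /* clear app metadata on free */
		__rte_mbuf_raw_free(m);
	}
}

The cost is one extra store per free for every application, but it
keeps the RX path untouched.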
Regards,
Olivier
Thread overview: 14+ messages
2015-07-09 23:37 [dpdk-dev] [PATCH 0/2] mbuf: improvements Stephen Hemminger
2015-07-09 23:37 ` [dpdk-dev] [PATCH 1/2] mbuf: Add rte_pktmbuf_copy Stephen Hemminger
2015-07-15 10:14 ` Olivier MATZ
2016-01-22 13:40 ` Mrzyglod, DanielX T
2016-01-22 17:33 ` Stephen Hemminger
2016-03-18 17:03 ` Pattan, Reshma
2016-03-18 17:40 ` Stephen Hemminger
2016-02-03 17:23 ` Olivier MATZ
[not found] ` <ccbdb829556f4423b009aff93f27c93b@BRMWP-EXMB11.corp.brocade.com>
2016-02-04 0:49 ` Stephen Hemminger
2015-07-09 23:37 ` [dpdk-dev] [PATCH 2/2] mbuf: make sure userdata is initialized Stephen Hemminger
2015-07-10 9:32 ` Bruce Richardson
2015-07-10 15:43 ` Stephen Hemminger
2015-07-15 10:10 ` Olivier MATZ
2016-02-03 17:23 ` Olivier MATZ [this message]