DPDK patches and discussions
From: "Xie, Huawei" <huawei.xie@intel.com>
To: Olivier MATZ <olivier.matz@6wind.com>,
	Panu Matilainen <pmatilai@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "dprovan@bivio.net" <dprovan@bivio.net>
Subject: Re: [dpdk-dev] [PATCH v6 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API
Date: Tue, 23 Feb 2016 05:35:08 +0000	[thread overview]
Message-ID: <C37D651A908B024F974696C65296B57B4C5F0655@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <C37D651A908B024F974696C65296B57B4C5EE961@SHSMSX101.ccr.corp.intel.com>

On 2/22/2016 10:52 PM, Xie, Huawei wrote:
> On 2/4/2016 1:24 AM, Olivier MATZ wrote:
>> Hi,
>>
>> On 01/27/2016 02:56 PM, Panu Matilainen wrote:
>>> Since rte_pktmbuf_alloc_bulk() is an inline function, it is not part of
>>> the library ABI and should not be listed in the version map.
>>>
>>> I assume its inline for performance reasons, but then you lose the
>>> benefits of dynamic linking such as ability to fix bugs and/or improve
>>> it by just updating the library. Since the point of having a bulk API is
>>> to improve performance by reducing the number of calls required, does it
>>> really have to be inline? As in, have you actually measured the
>>> difference between inline and non-inline and decided its worth all the
>>> downsides?
>> Agree with Panu. It would be interesting to compare the performance
>> between inline and non-inline to decide whether to inline it or not.
> Will update after I have gathered more data. Inline could show an
> obvious performance difference in some cases.

Panu and Olivier:
I wrote a simple benchmark. It runs 10M rounds; in each round 8 mbufs
are allocated through the bulk API and then freed.
These are the CPU cycles measured (Intel(R) Xeon(R) CPU E5-2680 0 @
2.70GHz, CPU isolated, timer interrupt disabled, RCU offloaded).
Btw, I removed some outlier samples, which showed up with a frequency
of about 1/10; sometimes the observed user CPU usage suddenly
disappeared, and I have no clue what happened.
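
For reference, the measurement loop looks roughly like the following
(a minimal sketch only, not the exact harness; it assumes an already
created pktmbuf mempool 'mp', and the names here are illustrative):

#include <rte_cycles.h>
#include <rte_debug.h>
#include <rte_mbuf.h>

#define ROUNDS  10000000
#define BULK_SZ 8

/* Measure the cycles spent allocating and freeing BULK_SZ mbufs per
 * round. 'mp' is an already created pktmbuf mempool. */
static uint64_t
bench_alloc_bulk(struct rte_mempool *mp)
{
	struct rte_mbuf *mbufs[BULK_SZ];
	uint64_t start = rte_rdtsc();
	unsigned i, j;

	for (i = 0; i < ROUNDS; i++) {
		if (rte_pktmbuf_alloc_bulk(mp, mbufs, BULK_SZ) != 0)
			rte_panic("mbuf allocation failed\n");
		for (j = 0; j < BULK_SZ; j++)
			rte_pktmbuf_free(mbufs[j]);
	}

	return rte_rdtsc() - start; /* total TSC cycles for all rounds */
}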

With 8 mbufs allocated per round, there is about a 6% performance
increase when using the inline version.
inline            non-inline
2780738888        2950309416
2834853696        2951378072
2823015320        2954500888
2825060032        2958939912
2824499804        2898938284
2810859720        2944892796
2852229420        3014273296
2787308500        2956809852
2793337260        2958674900
2822223476        2954346352
2785455184        2925719136
2821528624        2937380416
2822922136        2974978604
2776645920        2947666548
2815952572        2952316900
2801048740        2947366984
2851462672        2946469004

With 16 mbufs allocated per round, we can still observe an obvious
performance difference, though it is only 1%-2%.

inline            non-inline
5519987084        5669902680
5538416096        5737646840
5578934064        5590165532
5548131972        5767926840
5625585696        5831345628
5558282876        5662223764
5445587768        5641003924
5559096320        5775258444
5656437988        5743969272
5440939404        5664882412
5498875968        5785138532
5561652808        5737123940
5515211716        5627775604
5550567140        5630790628
5665964280        5589568164
5591295900        5702697308

With 32 or 64 mbufs allocated per round, the variation in the data
itself hides the performance difference.

So we prefer using inline for performance.
>> Also, it would be nice to have a simple test function in
>> app/test/test_mbuf.c. For instance, you could update
>> test_one_pktmbuf() to take a mbuf pointer as a parameter and remove
>> the mbuf allocation from the function. Then it could be called with
>> a mbuf allocated with rte_pktmbuf_alloc() (like before) and with
>> all the mbufs of rte_pktmbuf_alloc_bulk().

I don't quite get you. Do you mean we should write two test cases, one
that allocates mbufs through rte_pktmbuf_alloc_bulk() and one that uses
rte_pktmbuf_alloc()? That would be good to have; I could do it after
this patch.
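
If so, the bulk case might look something like this (a rough sketch
only; it assumes test_one_pktmbuf() has been changed to take an mbuf
pointer as you suggest, and NB_MBUF_BULK is just an illustrative
constant):

/* Sketch: run the per-mbuf checks on mbufs obtained from the bulk API. */
static int
test_pktmbuf_bulk(struct rte_mempool *pktmbuf_pool)
{
	struct rte_mbuf *mbufs[NB_MBUF_BULK];
	unsigned i;
	int ret = 0;

	if (rte_pktmbuf_alloc_bulk(pktmbuf_pool, mbufs, NB_MBUF_BULK) != 0)
		return -1;

	for (i = 0; i < NB_MBUF_BULK; i++) {
		if (test_one_pktmbuf(mbufs[i]) < 0) {
			ret = -1;
			break;
		}
	}

	for (i = 0; i < NB_MBUF_BULK; i++)
		rte_pktmbuf_free(mbufs[i]);

	return ret;
}
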
>>
>> Regards,
>> Olivier
>>
>


Thread overview: 54+ messages
2015-12-13 23:35 [dpdk-dev] [PATCH 0/2] provide rte_pktmbuf_alloc_bulk API and call it in vhost dequeue Huawei Xie
2015-12-13 23:35 ` [dpdk-dev] [PATCH 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2015-12-13 23:35 ` [dpdk-dev] [PATCH 2/2] vhost: call rte_pktmbuf_alloc_bulk in vhost dequeue Huawei Xie
2015-12-14  1:14 ` [dpdk-dev] [PATCH v2 0/2] provide rte_pktmbuf_alloc_bulk API and call it " Huawei Xie
2015-12-14  1:14   ` [dpdk-dev] [PATCH v2 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2015-12-17  6:41     ` Yuanhan Liu
2015-12-17 15:42       ` Ananyev, Konstantin
2015-12-18  2:17         ` Yuanhan Liu
2015-12-18  5:01     ` Stephen Hemminger
2015-12-18  5:21       ` Yuanhan Liu
2015-12-18  7:10       ` Xie, Huawei
2015-12-18 10:44       ` Ananyev, Konstantin
2015-12-18 17:32         ` Stephen Hemminger
2015-12-18 19:27           ` Wiles, Keith
2015-12-21 15:21             ` Xie, Huawei
2015-12-21 17:20               ` Wiles, Keith
2015-12-21 21:30                 ` Thomas Monjalon
2015-12-22  1:58                   ` Xie, Huawei
2015-12-21 22:34               ` Don Provan
2015-12-21 12:25           ` Xie, Huawei
2015-12-14  1:14   ` [dpdk-dev] [PATCH v2 2/2] vhost: call rte_pktmbuf_alloc_bulk in vhost dequeue Huawei Xie
2015-12-17  6:41     ` Yuanhan Liu
2015-12-22 16:17   ` [dpdk-dev] [PATCH v3 0/2] provide rte_pktmbuf_alloc_bulk API and call it " Huawei Xie
2015-12-22 16:17     ` [dpdk-dev] [PATCH v3 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2015-12-23 18:37       ` Stephen Hemminger
2015-12-23 18:49         ` Ananyev, Konstantin
2015-12-24  1:33           ` Xie, Huawei
2015-12-22 16:17     ` [dpdk-dev] [PATCH v3 2/2] vhost: call rte_pktmbuf_alloc_bulk in vhost dequeue Huawei Xie
2015-12-23 11:22       ` linhaifeng
2015-12-23 11:39         ` Xie, Huawei
2015-12-22 23:05 ` [dpdk-dev] [PATCH v4 0/2] provide rte_pktmbuf_alloc_bulk API and call it " Huawei Xie
2015-12-22 23:05   ` [dpdk-dev] [PATCH v4 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2015-12-22 23:05   ` [dpdk-dev] [PATCH v4 2/2] vhost: call rte_pktmbuf_alloc_bulk in vhost dequeue Huawei Xie
2015-12-27 16:38 ` [dpdk-dev] [PATCH v5 0/2] provide rte_pktmbuf_alloc_bulk API and call it " Huawei Xie
2015-12-27 16:38   ` [dpdk-dev] [PATCH v5 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2015-12-27 16:38   ` [dpdk-dev] [PATCH v5 2/2] vhost: call rte_pktmbuf_alloc_bulk in vhost dequeue Huawei Xie
2016-01-26 17:03 ` [dpdk-dev] [PATCH v6 0/2] provide rte_pktmbuf_alloc_bulk API and call it " Huawei Xie
2016-01-26 17:03   ` [dpdk-dev] [PATCH v6 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2016-01-27 13:56     ` Panu Matilainen
2016-02-03 17:23       ` Olivier MATZ
2016-02-22 14:49         ` Xie, Huawei
2016-02-23  5:35           ` Xie, Huawei [this message]
2016-02-24 12:11             ` Panu Matilainen
2016-02-24 13:23               ` Ananyev, Konstantin
2016-02-26  7:39                 ` Xie, Huawei
2016-02-26  8:45                   ` Olivier MATZ
2016-02-29 10:51                 ` Panu Matilainen
2016-02-29 16:14                   ` Thomas Monjalon
2016-02-26  8:55             ` Olivier MATZ
2016-02-26  9:07               ` Xie, Huawei
2016-02-26  9:18                 ` Olivier MATZ
2016-01-26 17:03   ` [dpdk-dev] [PATCH v6 2/2] vhost: call rte_pktmbuf_alloc_bulk in vhost dequeue Huawei Xie
2016-02-28 12:44 ` [dpdk-dev] [PATCH v7] mbuf: provide rte_pktmbuf_alloc_bulk API Huawei Xie
2016-02-29 16:27   ` Thomas Monjalon
