From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Panu Matilainen <pmatilai@redhat.com>,
"Xie, Huawei" <huawei.xie@intel.com>,
Olivier MATZ <olivier.matz@6wind.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "dprovan@bivio.net" <dprovan@bivio.net>
Subject: Re: [dpdk-dev] [PATCH v6 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API
Date: Wed, 24 Feb 2016 13:23:46 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836B0AC3F@irsmsx105.ger.corp.intel.com>
In-Reply-To: <56CD9DFE.4070702@redhat.com>
Hi Panu,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Panu Matilainen
> Sent: Wednesday, February 24, 2016 12:12 PM
> To: Xie, Huawei; Olivier MATZ; dev@dpdk.org
> Cc: dprovan@bivio.net
> Subject: Re: [dpdk-dev] [PATCH v6 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API
>
> On 02/23/2016 07:35 AM, Xie, Huawei wrote:
> > On 2/22/2016 10:52 PM, Xie, Huawei wrote:
> >> On 2/4/2016 1:24 AM, Olivier MATZ wrote:
> >>> Hi,
> >>>
> >>> On 01/27/2016 02:56 PM, Panu Matilainen wrote:
> >>>> Since rte_pktmbuf_alloc_bulk() is an inline function, it is not part of
> >>>> the library ABI and should not be listed in the version map.
> >>>>
> >>>> I assume it's inline for performance reasons, but then you lose the
> >>>> benefits of dynamic linking, such as the ability to fix bugs and/or improve
> >>>> it by just updating the library. Since the point of having a bulk API is
> >>>> to improve performance by reducing the number of calls required, does it
> >>>> really have to be inline? As in, have you actually measured the
> >>>> difference between inline and non-inline and decided it's worth all the
> >>>> downsides?
> >>> Agree with Panu. It would be interesting to compare the performance
> >>> between inline and non-inline to decide whether to inline it or not.
> >> Will update after I have gathered more data. Inline could show an obvious
> >> performance difference in some cases.
> >
> > Panu and Olivier:
> > I wrote a simple benchmark. It runs 10M rounds; in each round
> > 8 mbufs are allocated through the bulk API and then freed.
> > These are the CPU cycles measured (Intel(R) Xeon(R) CPU E5-2680 0 @
> > 2.70GHz, CPU isolated, timer interrupt disabled, RCU offloaded).
> > Btw, I have removed some outliers, which occurred with a frequency of
> > roughly 1/10. Sometimes the observed user CPU usage suddenly
> > disappeared; no clue what happened.
> >
> > With 8 mbufs allocated, there is about a 6% performance increase using inline.
> [...]
> >
> > With 16 mbufs allocated, we could still observe an obvious performance
> > difference, though only 1%-2%.
> >
> [...]
> >
> > With 32/64 mbufs allocated, the variance in the data itself would hide
> > the performance difference.
> > So we prefer using inline for performance.
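
For reference, a minimal sketch of the kind of microbenchmark described above,
assuming an already-created pktmbuf mempool and an illustrative burst size of 8.
This is not the original test code; the helper name bench_alloc_bulk and the
constants are made up for illustration:

    #include <stdio.h>
    #include <stdint.h>

    #include <rte_cycles.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define BENCH_ROUNDS (10 * 1000 * 1000)
    #define BENCH_BURST  8

    /* Allocate and free BENCH_BURST mbufs per round, then report the
     * average TSC cycles spent per round. */
    static void
    bench_alloc_bulk(struct rte_mempool *mp)
    {
        struct rte_mbuf *mbufs[BENCH_BURST];
        uint64_t start, cycles;
        unsigned int i, j;

        start = rte_rdtsc();
        for (i = 0; i < BENCH_ROUNDS; i++) {
            if (rte_pktmbuf_alloc_bulk(mp, mbufs, BENCH_BURST) != 0) {
                printf("mbuf allocation failed\n");
                return;
            }
            for (j = 0; j < BENCH_BURST; j++)
                rte_pktmbuf_free(mbufs[j]);
        }
        cycles = rte_rdtsc() - start;

        printf("%.1f cycles per round on average\n",
               (double)cycles / BENCH_ROUNDS);
    }
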
>
> At least I was more after real-world performance in a real-world
> use case rather than CPU cycles in a microbenchmark; we know function
> calls have a cost, but the benefits tend to outweigh the cons.
>
> Inline functions have their place, and they're far less evil in
> project-internal use, but in a library's public API they are BAD and should be ...
> well, not banned, because there are exceptions to every rule, but highly
> discouraged.
Why is that?
As you can see, right now we have all mbuf alloc/free routines as static inline,
and I think we would like to keep it that way.
So why should that particular function be different?
After all, that function is nothing more than a wrapper around
rte_mempool_get_bulk() plus a loop of rte_pktmbuf_reset() calls, unrolled by 4.
So unless the mempool get/put API changes, I can hardly see how there could be
any ABI breakage in the future.
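
To make that concrete, here is a simplified sketch of the wrapper structure just
described. It is an illustration only, not the exact code from the patch; the
function name pktmbuf_alloc_bulk_sketch is made up, and details of the actual
implementation such as reference-count initialisation are omitted:

    #include <rte_branch_prediction.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static inline int
    pktmbuf_alloc_bulk_sketch(struct rte_mempool *pool,
            struct rte_mbuf **mbufs, unsigned int count)
    {
        unsigned int idx;
        int rc;

        /* a single bulk get instead of 'count' separate mempool gets */
        rc = rte_mempool_get_bulk(pool, (void **)mbufs, count);
        if (unlikely(rc))
            return rc;

        /* initialise the mbufs; the reset loop is unrolled by 4 */
        for (idx = 0; idx + 4 <= count; idx += 4) {
            rte_pktmbuf_reset(mbufs[idx]);
            rte_pktmbuf_reset(mbufs[idx + 1]);
            rte_pktmbuf_reset(mbufs[idx + 2]);
            rte_pktmbuf_reset(mbufs[idx + 3]);
        }
        for (; idx < count; idx++)
            rte_pktmbuf_reset(mbufs[idx]);

        return 0;
    }
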
About the 'real world' performance gain: it was a 'real world' performance problem
that we tried to solve by introducing that function:
http://dpdk.org/ml/archives/dev/2015-May/017633.html
And according to user feedback, it does help:
http://dpdk.org/ml/archives/dev/2016-February/033203.html
Konstantin
>
> - Panu -
>