To: "Xie, Huawei", Olivier MATZ, "dev@dpdk.org"
References: <1450049754-33635-1-git-send-email-huawei.xie@intel.com> <1453827815-56384-1-git-send-email-huawei.xie@intel.com> <1453827815-56384-2-git-send-email-huawei.xie@intel.com> <56A8CCA3.7060302@redhat.com> <56B237AD.1040209@6wind.com>
From: Panu Matilainen
Message-ID: <56CD9DFE.4070702@redhat.com>
Date: Wed, 24 Feb 2016 14:11:42 +0200
Cc: "dprovan@bivio.net"
Subject: Re: [dpdk-dev] [PATCH v6 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API

On 02/23/2016 07:35 AM, Xie, Huawei wrote:
> On 2/22/2016 10:52 PM, Xie, Huawei wrote:
>> On 2/4/2016 1:24 AM, Olivier MATZ wrote:
>>> Hi,
>>>
>>> On 01/27/2016 02:56 PM, Panu Matilainen wrote:
>>>> Since rte_pktmbuf_alloc_bulk() is an inline function, it is not part of
>>>> the library ABI and should not be listed in the version map.
>>>>
>>>> I assume it's inline for performance reasons, but then you lose the
>>>> benefits of dynamic linking, such as the ability to fix bugs and/or
>>>> improve it by just updating the library. Since the point of having a
>>>> bulk API is to improve performance by reducing the number of calls
>>>> required, does it really have to be inline? As in, have you actually
>>>> measured the difference between inline and non-inline and decided it's
>>>> worth all the downsides?
>>> Agree with Panu. It would be interesting to compare the performance
>>> between inline and non-inline to decide whether to inline it or not.
>> Will update after I gather more data. Inline could show an obvious
>> performance difference in some cases.
>
> Panu and Olivier:
> I wrote a simple benchmark. This benchmark runs 10M rounds; in each round
> 8 mbufs are allocated through the bulk API and then freed.
> These are the CPU cycles measured (Intel(R) Xeon(R) CPU E5-2680 0 @
> 2.70GHz, CPU isolated, timer interrupt disabled, RCU offloaded).
> Btw, I have removed some exceptional data points, which occurred with a
> frequency of about 1/10. Sometimes the observed user CPU usage suddenly
> disappeared; no clue what happened.
>
> With 8 mbufs allocated, there is about a 6% performance increase using
> inline.
> [...]
>
> With 16 mbufs allocated, we could still observe an obvious performance
> difference, though only 1%-2%.
> [...]
>
> With 32/64 mbufs allocated, the deviation of the data itself would hide
> the performance difference.
> So we prefer using inline for performance.

At least I was more after real-world performance in a real-world use case
rather than CPU cycles in a microbenchmark; we know function calls have a
cost, but the benefits tend to outweigh the cons.

Inline functions have their place, and they're far less evil in
project-internal use, but in a library's public API they are BAD and
should be ... well, not banned, because there are exceptions to every
rule, but highly discouraged.

	- Panu -