Subject: Re: [dpdk-dev] [PATCH v6 1/2] mbuf: provide rte_pktmbuf_alloc_bulk API
From: Olivier MATZ
To: "Xie, Huawei", "Ananyev, Konstantin", Panu Matilainen, "dev@dpdk.org"
Cc: "dprovan@bivio.net"
Date: Fri, 26 Feb 2016 09:45:25 +0100
Message-ID: <56D010A5.9050006@6wind.com>

On 02/26/2016 08:39 AM, Xie, Huawei wrote:
>>>> With 8 mbufs allocated, there is about a 6% performance increase
>>>> using inline.
>>>> With 16 mbufs allocated, we can still observe an obvious performance
>>>> difference, though only 1%-2%.
>
> On 2/24/2016 9:23 PM, Ananyev, Konstantin wrote:
>> As you can see, right now we have all mbuf alloc/free routines as
>> static inline, and I think we would like to keep it like that.
>> So why should that particular function be different?
>> After all, that function is nothing more than a wrapper around
>> rte_mempool_get_bulk() plus a loop of rte_pktmbuf_reset() unrolled
>> by 4.
>> So unless the mempool get/put API changes, I can hardly see how there
>> could be any ABI breakage in the future.
>> About the 'real world' performance gain - it was a 'real world'
>> performance problem that we tried to solve by introducing that
>> function:
>> http://dpdk.org/ml/archives/dev/2015-May/017633.html
>>
>> And according to user feedback, it does help:
>> http://dpdk.org/ml/archives/dev/2016-February/033203.html

For me, there's no doubt this function will help in real-world use
cases. It's also true that today most (actually, all) datapath mbuf
functions are inline. Although I understand Panu's point of view about
the use of inline functions, trying to de-inline some functions of the
mbuf API (and of other APIs like mempool or ring) would first require a
deep analysis of the performance impact, and I think there would be an
impact for most of them.

In this particular case, since the function does bulk allocations, the
cost of the function call is probably amortized over the batch; that's
why I was curious about a comparison with and without inlining (a rough
sketch of the function under discussion follows at the end of this
mail). But I'm not sure that making only this one function non-inline
makes a lot of sense.

So:
Acked-by: Olivier Matz
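--

For readers reaching this thread from the archives: below is a minimal
sketch of the kind of wrapper being discussed, based only on
Konstantin's description above (one rte_mempool_get_bulk() call
followed by per-mbuf initialization with rte_pktmbuf_reset()). It is
not the patch itself: the actual patch unrolls the reset loop by 4, and
the refcount handling shown here (rte_mbuf_refcnt_set()) is an
assumption on the sketch's part, not taken from the patch.

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_branch_prediction.h>

/* Sketch only: a single mempool access for the whole batch instead of
 * 'count' separate rte_pktmbuf_alloc() calls, followed by per-mbuf
 * initialization. The real patch unrolls this loop by 4. */
static inline int
pktmbuf_alloc_bulk_sketch(struct rte_mempool *pool,
                          struct rte_mbuf **mbufs, unsigned count)
{
	unsigned idx;
	int rc;

	/* get 'count' raw objects from the pool in one call;
	 * on failure, nothing is allocated */
	rc = rte_mempool_get_bulk(pool, (void **)mbufs, count);
	if (unlikely(rc))
		return rc;

	for (idx = 0; idx < count; idx++) {
		/* assumed init sequence, mirroring what a single
		 * rte_pktmbuf_alloc() does for one mbuf */
		rte_mbuf_refcnt_set(mbufs[idx], 1);
		rte_pktmbuf_reset(mbufs[idx]);
	}
	return 0;
}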
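And a hypothetical call site, illustrating why the bulk form pays off
(the 'mp' pool and the burst size of 32 are made up for illustration):

	struct rte_mbuf *pkts[32];

	/* one bulk call replaces 32 rte_pktmbuf_alloc(mp) calls,
	 * i.e. 32 round trips to the mempool */
	if (pktmbuf_alloc_bulk_sketch(mp, pkts, 32) != 0)
		return; /* pool exhausted: no mbufs were allocated */

	/* ... fill and enqueue pkts[0..31] ... */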