From mboxrd@z Thu Jan 1 00:00:00 1970
From: Olivier Matz
To: Jerin Jacob
Cc: dev@dpdk.org, thomas.monjalon@6wind.com, bruce.richardson@intel.com,
 konstantin.ananyev@intel.com
Subject: Re: [dpdk-dev] [PATCH v2] mempool: replace c memcpy code semantics
 with optimized rte_memcpy
Date: Fri, 17 Jun 2016 12:40:23 +0200
Message-ID: <5763D397.8060900@6wind.com>
In-Reply-To: <20160603070202.GA6153@localhost.localdomain>
References: <1464101442-10501-1-git-send-email-jerin.jacob@caviumnetworks.com>
 <1464250025-9191-1-git-send-email-jerin.jacob@caviumnetworks.com>
 <574BFD97.2010505@6wind.com>
 <20160531125822.GA10995@localhost.localdomain>
 <574DFC9A.2050304@6wind.com>
 <20160601070018.GA26922@localhost.localdomain>
 <574FE202.2060306@6wind.com>
 <20160602093936.GB6794@localhost.localdomain>
 <5750A220.6040804@6wind.com>
 <20160603070202.GA6153@localhost.localdomain>
List-Id: patches and discussions about DPDK

Hi Jerin,

On 06/03/2016 09:02 AM, Jerin Jacob wrote:
> On Thu, Jun 02, 2016 at 11:16:16PM +0200, Olivier MATZ wrote:
> Hi Olivier,
>
>> This is probably more a measure of the pure CPU cost of the mempool
>> function, without considering the memory cache aspect.
>> So, of course, a real use-case test should be done to confirm
>> whether or not it increases the performance. I'll manage to do a
>> test and let you know the result.
>
> OK
>
> IMO, using rte_memcpy in the put path makes sense (this patch), as
> there is no behavior change. However, if rte_memcpy in the get path,
> with its behavioral change, makes sense on some platform, then we
> can enable it on a conditional basis (I am OK with that).
>
>> By the way, not all drivers allocate or free mbufs in bulk, so this
>> modification would only affect the ones that do. Which driver are
>> you using for your test?
>
> I have tested with the ThunderX nicvf PMD (which uses the bulk mode).
> I recently sent the driver to the ML for review.

Just to let you know, I have not forgotten this. I still need to find
some time to do a performance test.

Regards,
Olivier