To: Jerin Jacob, Olivier Matz
References: <1464101442-10501-1-git-send-email-jerin.jacob@caviumnetworks.com> <57446C63.4040605@6wind.com> <20160524151654.GA10870@localhost.localdomain>
Cc: dev@dpdk.org, thomas.monjalon@6wind.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com
From: "Hunt, David"
Message-ID: <57482079.1050605@intel.com>
Date: Fri, 27 May 2016 11:24:57 +0100
In-Reply-To: <20160524151654.GA10870@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH] mbuf: replace c memcpy code semantics with optimized rte_memcpy

On 5/24/2016 4:17 PM, Jerin Jacob wrote:
> On Tue, May 24, 2016 at 04:59:47PM +0200, Olivier Matz wrote:
>
>> Are you seeing some performance improvement by using rte_memcpy()?
> Yes, in some cases. In the default case, the copy was replaced with memcpy
> by the compiler itself (gcc 5.3). But when I tried the external mempool
> manager patch, performance dropped by almost 800Kpps. Debugging further, it
> turned out that an unrelated change in the external mempool manager was
> knocking out the memcpy. An explicit rte_memcpy brought back 500Kpps. The
> remaining 300Kpps drop is still unknown (in my test setup, packets are in
> the local cache, so it must be something to do with the __mempool_put_bulk
> text alignment change or similar).
>
> Has anyone else observed a performance drop with the external pool manager?
>
> Jerin

Jerin,
    I'm seeing a 300Kpps drop in throughput when I apply this on top of the
external mempool manager patch. If you're seeing an increase when you apply
this patch first, and then a drop when applying the mempool manager patch,
the two patches must be conflicting in some way. We probably need to
investigate further.

Regards,
Dave.
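
For context, the patch under discussion replaces the per-pointer copy loop in
the mempool put path (__mempool_put_bulk()) with a single bulk copy via
rte_memcpy(). Below is a minimal sketch of the two variants, not the exact
patch: the cache layout (objs[] and len) is assumed from the 16.04-era
rte_mempool.h, and plain memcpy() stands in for rte_memcpy() so the snippet
builds without the DPDK headers.

    #include <stdint.h>
    #include <string.h>

    #define CACHE_SIZE 512

    /* Simplified stand-in for the per-lcore mempool cache. */
    struct mempool_cache {
            uint32_t len;             /* number of objects currently cached */
            void *objs[CACHE_SIZE];   /* cached object pointers */
    };

    /* Original semantics: copy the object pointers back one by one. */
    static void
    put_bulk_loop(struct mempool_cache *cache, void * const *obj_table,
                  unsigned int n)
    {
            void **cache_objs = &cache->objs[cache->len];
            unsigned int index;

            for (index = 0; index < n; index++, obj_table++)
                    cache_objs[index] = *obj_table;
            cache->len += n;
    }

    /* Patched semantics: one bulk copy of the pointer array
     * (the real patch uses rte_memcpy() rather than memcpy()). */
    static void
    put_bulk_memcpy(struct mempool_cache *cache, void * const *obj_table,
                    unsigned int n)
    {
            memcpy(&cache->objs[cache->len], obj_table, sizeof(void *) * n);
            cache->len += n;
    }

As Jerin notes above, gcc 5.3 can already turn the loop variant into a
memcpy-like sequence on its own, so the explicit rte_memcpy() only shows a
difference once surrounding code changes perturb that optimization.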