From mboxrd@z Thu Jan 1 00:00:00 1970
To: Jerin Jacob, Olivier Matz
References: <1464101442-10501-1-git-send-email-jerin.jacob@caviumnetworks.com>
 <57446C63.4040605@6wind.com>
 <20160524151654.GA10870@localhost.localdomain>
 <57482079.1050605@intel.com>
Cc: dev@dpdk.org, thomas.monjalon@6wind.com, bruce.richardson@intel.com,
 konstantin.ananyev@intel.com
From: "Hunt, David"
Message-ID: <57484F6F.40204@intel.com>
Date: Fri, 27 May 2016 14:45:19 +0100
In-Reply-To: <57482079.1050605@intel.com>
Subject: Re: [dpdk-dev] [PATCH] mbuf: replace c memcpy code semantics with
 optimized rte_memcpy

On 5/27/2016 11:24 AM, Hunt, David wrote:
>
> On 5/24/2016 4:17 PM, Jerin Jacob wrote:
>> On Tue, May 24, 2016 at 04:59:47PM +0200, Olivier Matz wrote:
>>
>>> Are you seeing some performance improvement by using rte_memcpy()?
>> Yes, in some cases. In the default case it was replaced with memcpy by
>> the compiler itself (gcc 5.3). But when I tried the external mempool
>> manager patch, performance dropped by almost 800 Kpps. Debugging further,
>> it turned out that an unrelated change in the external mempool manager
>> was knocking out the memcpy. An explicit rte_memcpy brought back
>> 500 Kpps. The remaining 300 Kpps drop is still unexplained (in my test
>> setup, packets are in the local cache, so it must be something to do
>> with the __mempool_put_bulk text alignment change or similar).
>>
>> Has anyone else observed a performance drop with the external pool
>> manager?
>>
>> Jerin
>
> Jerin,
>     I'm seeing a 300 Kpps drop in throughput when I apply this on top of
> the external mempool manager patch. If you're seeing an increase when you
> apply this patch first, then a drop when applying the mempool manager,
> the two patches must be conflicting in some way. We probably need to
> investigate further.
> Regards,
> Dave.
>

On further investigation, I now have a setup with no performance
degradation. My previous tests were accessing the NICs on a different
NUMA node. Once I started testpmd with the correct coremask, the
difference between the pre- and post-rte_memcpy patch runs is negligible
(maybe a 0.1% drop).

Regards,
Dave.
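
For context, the patch under discussion replaces a scalar pointer-copy
loop in the mempool put/get paths with rte_memcpy(). A minimal sketch of
that shape, assuming the 16.04-era __mempool_put_bulk() cache layout
(cache_objs, obj_table, and n as in that function; not the exact patch
hunk):

    #include <rte_memcpy.h>

    /*
     * Inside __mempool_put_bulk(): copy n object pointers from the
     * caller's obj_table into the per-lcore cache.
     *
     * Original scalar loop:
     *     for (index = 0; index < n; ++index, obj_table++)
     *         cache_objs[index] = *obj_table;
     *
     * Replaced by one bulk copy through the architecture-optimized
     * rte_memcpy (SSE/AVX on x86):
     */
    rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);

Whether the compiler already turns the scalar loop into an inlined memcpy
(as gcc 5.3 does in the default build, per Jerin's note above) is what
makes the measured gain configuration-dependent.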
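On the NUMA point: the drop disappears once testpmd's cores sit on the
same socket as the NIC. A hypothetical invocation (the coremask and
memory-channel count here are examples only, not from the thread):

    # cores 8-15 (mask 0xff00) on the NIC's socket; 4 memory channels
    ./testpmd -c 0xff00 -n 4 -- -i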