From: Bruce Richardson
To: Thomas Monjalon
Cc: "Li, Xiaoyun", "Ananyev, Konstantin", dev@dpdk.org, "Lu, Wenzhuo", "Zhang, Helin", ophirmu@mellanox.com
Date: Thu, 19 Oct 2017 10:29:42 +0100
Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
Message-ID: <20171019092941.GA5780@bricha3-MOBL3.ger.corp.intel.com>
In-Reply-To: <4686516.j2scn2ENsX@xps>
References: <1507206794-79941-1-git-send-email-xiaoyun.li@intel.com> <3438028.jIYWTcBuhA@xps> <4686516.j2scn2ENsX@xps>

On Thu, Oct 19, 2017 at 11:00:33AM +0200, Thomas Monjalon wrote:
> 19/10/2017 10:50, Li, Xiaoyun:
> > > -----Original Message-----
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Thursday, October 19, 2017 16:34
> > > To: Li, Xiaoyun
> > > Cc: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
> > >
> > > 19/10/2017 09:51, Li, Xiaoyun:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > 19/10/2017 04:45, Li, Xiaoyun:
> > > > > > Hi
> > > > > >
> > > > > > > > > The significant change of this patch is to call a function
> > > > > > > > > pointer for packet size > 128 (RTE_X86_MEMCPY_THRESH).
> > > > > > > > The perf drop is due to the function call replacing the
> > > > > > > > inline copy.
> > > > > > > >
> > > > > > > > > Please could you provide some benchmark numbers?
> > > > > > > > I ran memcpy_perf_test, which shows the time cost of memcpy.
> > > > > > > > I ran it on Broadwell with SSE and AVX2.
> > > > > > > > But I only drew pictures and looked at the trend; I did not
> > > > > > > > compute the exact percentage. Sorry about that.
> > > > > > > > The picture shows results for copy sizes of 2, 4, 6, 8, 9, 12,
> > > > > > > > 16, 32, 64, 128, 192, 256, 320, 384, 448, 512, 768, 1024,
> > > > > > > > 1518, 1522, 1536, 1600, 2048, 2560, 3072, 3584, 4096, 4608,
> > > > > > > > 5120, 5632, 6144, 6656, 7168, 7680, 8192 bytes.
> > > > > > > > In my test, as the size grows, the drop diminishes. (Copy
> > > > > > > > time is used as the performance metric.) From the trend
> > > > > > > > picture, when the size is smaller than 128 bytes, the perf
> > > > > > > > drops a lot, almost 50%. Above 128 bytes, it approaches the
> > > > > > > > original DPDK.
> > > > > > > > I computed it just now: when the size is greater than 128
> > > > > > > > bytes and smaller than 1024 bytes, the perf drops about 15%.
> > > > > > > > Above 1024 bytes, the perf drops about 4%.
> > > > > > > >
> > > > > > > > > From a test done at Mellanox, there might be a performance
> > > > > > > > > degradation of about 15% in testpmd txonly with AVX2.
> > > > > > > > >
> > > > > > I did tests on X710, XXV710, X540 and MT27710 but didn't see any
> > > > > > performance degradation.
> > > > > >
> > > > > > I used the command
> > > > > > "./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- -I"
> > > > > > and set fwd txonly.
> > > > > > I tested it on v17.11-rc1, then reverted my patch and tested it
> > > > > > again. "show port stats all" gives the throughput in pps, but the
> > > > > > results are similar and there is no drop.
> > > > > >
> > > > > > Did I miss something?
> > > > >
> > > > > I do not understand. Yesterday you confirmed a 15% drop with
> > > > > buffers between 128 and 1024 bytes.
> > > > > But you do not see this drop in your txonly tests, right?
> > > > >
> > > > Yes. The drop is seen with the unit test.
> > > > Using the command "make test -j", then "./build/app/test -c f -n 4",
> > > > then run "memcpy_perf_autotest".
> > > > The results are the cycles that the memory copy costs.
> > > > But I only use it to show the trend, because I heard that it's not
> > > > recommended to use micro-benchmarks like test_memcpy_perf for a
> > > > memcpy performance report, as they aren't likely to reflect the
> > > > performance of real-world applications.
> > >
> > > Yes, real applications can hide the memcpy cost.
> > > Sometimes, the cost appears for real :)
> > >
> > > > Details can be seen at
> > > > https://software.intel.com/en-us/articles/performance-optimization-of-memcpy-in-dpdk
> > > >
> > > > And I didn't see a drop in the testpmd txonly test. Maybe it's
> > > > because there are not many memcpy calls.
> > >
> > > It has been seen in a mlx4 use-case using more memcpy.
> > > I think 15% in a micro-benchmark is too much.
> > > What can we do? Raise the threshold?
> > >
> > I think so. If there is a big drop, we can try raising the threshold.
> > Maybe 1024? But I am not sure.
> > And I didn't reproduce the 15% drop on Mellanox, so I am not sure how
> > to verify it.
>
> I think we should focus on the micro-benchmark and find a reasonable
> threshold for a reasonable drop tradeoff.
>
Sadly, it may not be that simple. What shows the best performance in
micro-benchmarks may not show the same effect in a real application.

/Bruce
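
[Editor's note: for readers following the thread, below is a minimal,
hypothetical sketch of the kind of threshold-based run-time dispatch being
discussed. It is not the code from the patch; MEMCPY_THRESH stands in for
RTE_X86_MEMCPY_THRESH, and the function names are illustrative only.]

/*
 * Sketch: small copies stay inline, larger copies go through a function
 * pointer selected once at startup based on the CPU's features.
 */
#include <stddef.h>
#include <string.h>

#define MEMCPY_THRESH 128   /* stand-in for RTE_X86_MEMCPY_THRESH */

typedef void *(*memcpy_fn_t)(void *dst, const void *src, size_t n);

/* Fallback body; a real build would also carry SSE/AVX2/AVX-512 variants. */
static void *memcpy_generic(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

static memcpy_fn_t memcpy_ptr = memcpy_generic;

/* Pick the variant once, before main() runs. */
__attribute__((constructor))
static void memcpy_dispatch_init(void)
{
	__builtin_cpu_init();
	if (__builtin_cpu_supports("avx2"))
		memcpy_ptr = memcpy_generic;  /* would select an AVX2 body here */
}

static inline void *dispatched_memcpy(void *dst, const void *src, size_t n)
{
	if (n <= MEMCPY_THRESH)
		return memcpy(dst, src, n);   /* small sizes: inline, no call */
	return memcpy_ptr(dst, src, n);       /* large sizes: one indirect call */
}

[The tradeoff debated above is visible in the last function: every copy
larger than the threshold pays an indirect call that the previous
always-inline code did not, which is why raising the threshold shrinks the
micro-benchmark drop.]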