From: "Ananyev, Konstantin"
To: "Li, Xiaoyun", "Richardson, Bruce", Thomas Monjalon
CC: dev@dpdk.org, "Lu, Wenzhuo", "Zhang, Helin", ophirmu@mellanox.com
Date: Wed, 25 Oct 2017 09:14:34 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772585FAAEC00@IRSMSX103.ger.corp.intel.com>
References: <1507206794-79941-1-git-send-email-xiaoyun.li@intel.com> <3438028.jIYWTcBuhA@xps> <4686516.j2scn2ENsX@xps> <20171019092941.GA5780@bricha3-MOBL3.ger.corp.intel.com> <2601191342CEEE43887BDE71AB9772585FAAEB8E@IRSMSX103.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy

> -----Original Message-----
> From: Li, Xiaoyun
> Sent: Wednesday, October 25, 2017 9:54 AM
> To: Ananyev, Konstantin; Richardson, Bruce; Thomas Monjalon
> Cc: dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> Subject: RE: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Wednesday, October 25, 2017 16:51
> > To: Li, Xiaoyun; Richardson, Bruce; Thomas Monjalon
> > Cc: dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> > Subject: RE: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
> >
> > > -----Original Message-----
> > > From: Li, Xiaoyun
> > > Sent: Wednesday, October 25, 2017 7:55 AM
> > > To: Li, Xiaoyun; Richardson, Bruce; Thomas Monjalon
> > > Cc: Ananyev, Konstantin; dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> > > Subject: RE: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
> > >
> > > Hi
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Li, Xiaoyun
> > > > Sent: Friday, October 20, 2017 09:03
> > > > To: Richardson, Bruce; Thomas Monjalon
> > > > Cc: Ananyev, Konstantin; dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
> > > >
> > > > > -----Original Message-----
> > > > > From: Richardson, Bruce
> > > > > Sent: Thursday, October 19, 2017 17:30
> > > > > To: Thomas Monjalon
> > > > > Cc: Li, Xiaoyun; Ananyev, Konstantin; dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> > > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
> > > > >
> > > > > On Thu, Oct 19, 2017 at 11:00:33AM +0200, Thomas Monjalon wrote:
> > > > > > 19/10/2017 10:50, Li, Xiaoyun:
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > > > > Sent: Thursday, October 19, 2017 16:34
> > > > > > > > To: Li, Xiaoyun
> > > > > > > > Cc: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org; Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> > > > > > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
> > > > > > > >
> > > > > > > > 19/10/2017 09:51, Li, Xiaoyun:
> > > > > > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > > > > > > 19/10/2017 04:45, Li, Xiaoyun:
> > > > > > > > > > > Hi
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The significant change of this patch is to call a function pointer for packet size > 128 (RTE_X86_MEMCPY_THRESH).
> > > > > > > > > > > > > The perf drop is due to the function call replacing the inline.
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Please could you provide some benchmark numbers?
> > > > > > > > > > > > > I ran memcpy_perf_test, which shows the time cost of memcpy. I ran it on Broadwell with SSE and AVX2.
> > > > > > > > > > > > > But I just drew pictures and looked at the trend, and did not compute the exact percentage. Sorry about that.
> > > > > > > > > > > > > The picture shows results for copy sizes of 2, 4, 6, 8, 9, 12, 16, 32, 64, 128, 192, 256, 320, 384, 448, 512, 768, 1024, 1518, 1522, 1536, 1600, 2048, 2560, 3072, 3584, 4096, 4608, 5120, 5632, 6144, 6656, 7168, 7680, 8192.
> > > > > > > > > > > > > In my test, as the size grows, the drop shrinks. (Using copy time to indicate the perf.)
> > > > > > > > > > > > > From the trend picture, when the size is smaller than 128 bytes, the perf drops a lot, almost 50%. And above 128 bytes, it approaches the original DPDK.
> > > > > > > > > > > > > I computed it just now; it shows that between 128 bytes and 1024 bytes, the perf drops about 15%.
> > > > > > > > > > > > > Above 1024 bytes, the perf drops about 4%.
> > > > > > > > > > > > >
> > > > > > > > > > > > > > From a test done at Mellanox, there might be a performance degradation of about 15% in testpmd txonly with AVX2.
> > > > > > > > > > >
> > > > > > > > > > > I did tests on X710, XXV710, X540 and MT27710 but didn't see performance degradation.
> > > > > > > > > > >
> > > > > > > > > > > I used the command "./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- -I" and set fwd txonly.
> > > > > > > > > > > I tested it on v17.11-rc1, then reverted my patch and tested it again.
> > > > > > > > > > > "show port stats all" shows the throughput in pps. But the results are similar and there is no drop.
> > > > > > > > > > >
> > > > > > > > > > > Did I miss something?
> > > > > > > > > >
> > > > > > > > > > I do not understand. Yesterday you confirmed a 15% drop with buffers between 128 and 1024 bytes.
> > > > > > > > > > But you do not see this drop in your txonly tests, right?
> > > > > > > > > >
> > > > > > > > > Yes. The drop is seen with the test application.
> > > > > > > > > Run "make test -j", then "./build/app/test -c f -n 4", then run "memcpy_perf_autotest".
> > > > > > > > > The results are the cycles that the memory copy costs.
> > > > > > > > > But I just use it to show the trend, because I heard that it's not recommended to use micro benchmarks like test_memcpy_perf for memcpy performance reports, as they aren't likely to reflect the performance of real world applications.
> > > > > > > >
> > > > > > > > Yes, real applications can hide the memcpy cost.
> > > > > > > > Sometimes, the cost appears for real :)
> > > > > > > >
> > > > > > > > > Details can be seen at https://software.intel.com/en-us/articles/performance-optimization-of-memcpy-in-dpdk
> > > > > > > > >
> > > > > > > > > And I didn't see a drop in the testpmd txonly test. Maybe that's because there are not a lot of memcpy calls.
> > > > > > > >
> > > > > > > > It has been seen in a mlx4 use-case using more memcpy.
> > > > > > > > I think 15% in a micro-benchmark is too much.
> > > > > > > > What can we do? Raise the threshold?
> > > > > > > >
> > > > > > > I think so. If there is a big drop, we can try raising the threshold. Maybe to 1024? But I am not sure.
> > > > > > > But I didn't reproduce the 15% drop on Mellanox and am not sure how to verify it.
> > > > > >
> > > > > > I think we should focus on the micro-benchmark and find a reasonable threshold for a reasonable drop tradeoff.
> > > > > >
> > > > > Sadly, it may not be that simple. What shows the best performance in micro-benchmarks may not show the same effect in a real application.
> > > > >
> > > > > /Bruce
> > > >
> > > > Then how to measure the performance?
> > > >
> > > > And I cannot reproduce the 15% drop on Mellanox.
> > > > Could the person who measured the 15% drop help to test again with a 1024 threshold and see if there is any improvement?
> > >
> > > As Bruce said, the best performance on a micro-benchmark may not show the same effect in real applications.
> > > And I cannot reproduce the 15% drop.
> > > And I don't know if raising the threshold can improve the perf or not.
> > > Could the person who measured the 15% drop help to test again with a 1024 threshold and see if there is any improvement?
> >
> > As I already asked before - why not make that threshold dynamic?
> > Konstantin
> >
> I want to confirm that raising the threshold is useful. Then we can make it dynamic and set it very large by default.

Ok.

>
> > > Best Regards
> > > Xiaoyun Li
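For context, the mechanism debated in this thread is a size threshold that decides between the existing inline copy and a function pointer resolved at run time, plus Konstantin's suggestion to make that threshold settable at run time ("dynamic"). The sketch below only illustrates that idea; apart from RTE_X86_MEMCPY_THRESH, which the thread names, all identifiers (rte_memcpy_sketch, rte_memcpy_set_threshold, memcpy_generic, ...) are illustrative assumptions, not the actual symbols from the patch.

```c
/*
 * Illustrative sketch only -- not the patch's actual implementation.
 * Copies at or below the threshold stay on the inline path; larger
 * copies go through a function pointer chosen once at startup.
 * Making the threshold a variable is what "dynamic threshold" means here.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the SSE/AVX2/AVX512 implementations picked by CPU flags. */
static void *memcpy_generic(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

/* Resolved at init time to the best copy routine for the running CPU. */
static void *(*rte_memcpy_ptr)(void *dst, const void *src, size_t n) =
	memcpy_generic;

/* Assumed default; the thread names RTE_X86_MEMCPY_THRESH = 128. */
static size_t memcpy_threshold = 128;

/* "Dynamic threshold": applications or benchmarks can move the crossover
 * point, e.g. to 1024, or to SIZE_MAX to always stay on the inline path. */
void rte_memcpy_set_threshold(size_t threshold)
{
	memcpy_threshold = threshold;
}

static inline void *rte_memcpy_sketch(void *dst, const void *src, size_t n)
{
	if (n <= memcpy_threshold)
		return memcpy(dst, src, n);    /* small copy: inline, no call overhead */
	return rte_memcpy_ptr(dst, src, n);    /* large copy: indirect call */
}
```

With a setter like this, the open question in the thread (does raising the threshold to 1024 help?) could be checked by re-running memcpy_perf_autotest at several thresholds without rebuilding.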