From: "Li, Xiaoyun" <xiaoyun.li@intel.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: "Ananyev, Konstantin"; "Richardson, Bruce"; dev@dpdk.org; "Lu, Wenzhuo"; "Zhang, Helin"; ophirmu@mellanox.com
Date: Wed, 25 Oct 2017 10:32:25 +0000
Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
In-Reply-To: <4158692.mhjs8xbxgm@xps>

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, October 25, 2017 17:00
> To: Li, Xiaoyun
> Cc: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org; Lu, Wenzhuo;
> Zhang, Helin; ophirmu@mellanox.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over
> memcpy
>
> 25/10/2017 10:54, Li, Xiaoyun:
> > > > > > > I think we should focus on micro-benchmarks and find a
> > > > > > > reasonable threshold for a reasonable drop tradeoff.
> > > > > > >
> > > > > > Sadly, it may not be that simple. What shows the best
> > > > > > performance in micro-benchmarks may not show the same effect
> > > > > > in a real application.
> > > > > >
> > > > > > /Bruce
> > > > >
> > > > > Then how should we measure the performance?
> > > > >
> > > > > And I cannot reproduce the 15% drop on Mellanox.
> > > > > Could the person who saw the 15% drop help test again with a
> > > > > 1024 threshold and see if there is any improvement?
> > > >
> > > > As Bruce said, the best performance in a micro-benchmark may not
> > > > show the same effect in real applications.
> > > > And I cannot reproduce the 15% drop.
> > > > And I don't know whether raising the threshold improves the perf or not.
> > > > Could the person who saw the 15% drop help test again with a
> > > > 1024 threshold and see if there is any improvement?
> > >
> > > As I already asked before - why not make that threshold dynamic?
> > > Konstantin
> > >
> > I want to confirm first that raising the threshold is useful. Then we can
> > make it dynamic and set a very large value as the default.
>
> You can confirm it with micro-benchmarks.

I did tests with memcpy_perf_test, with the threshold set to 1024.
For copies smaller than 1024 bytes, it costs 2~4 cycles more than the
original. For example, where the original takes 10 cycles, it now takes 12,
so the drop is 2/12, about 17%. I don't know whether this kind of drop
matters a lot or not.
And above 1024 bytes, the drop is almost 4%, as I said before.

/Xiaoyun
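P.S. For concreteness, here is a minimal C sketch of how a run-time-tunable
threshold could sit on top of the dispatch. This is my own illustration, not
the actual patch code; rte_memcpy_threshold, rte_memcpy_set_threshold,
rte_memcpy_dispatched and rte_memcpy_sketch are made-up names.

#include <stddef.h>
#include <string.h>

/* Hypothetical tunable crossover point between the inline small-copy
 * path and the dispatched path; 1024 is the value tested above. */
static size_t rte_memcpy_threshold = 1024;

/* Pointer selected once at startup from CPUID checks (e.g. AVX2 or
 * AVX-512 variants); plain memcpy stands in for them here. */
static void *(*rte_memcpy_dispatched)(void *, const void *, size_t) = memcpy;

/* The dynamic threshold Konstantin asks about: let the application
 * move the crossover point at run time. */
void rte_memcpy_set_threshold(size_t threshold)
{
	rte_memcpy_threshold = threshold;
}

static inline void *
rte_memcpy_sketch(void *dst, const void *src, size_t n)
{
	/* Below the threshold, call memcpy directly so the compiler can
	 * expand it inline; this is where the 2~4 extra cycles of an
	 * indirect call hurt most on small sizes. */
	if (n < rte_memcpy_threshold)
		return memcpy(dst, src, n);
	/* Above it, the indirect-call cost is amortized over the copy. */
	return rte_memcpy_dispatched(dst, src, n);
}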