From: "Li, Xiaoyun" <xiaoyun.li@intel.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: "Ananyev, Konstantin", "Richardson, Bruce", dev@dpdk.org,
 "Lu, Wenzhuo", "Zhang, Helin", ophirmu@mellanox.com
Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
Date: Thu, 19 Oct 2017 07:51:54 +0000
In-Reply-To: <1661434.2yK3chXuTC@xps>
References: <1507206794-79941-1-git-send-email-xiaoyun.li@intel.com>
 <1661434.2yK3chXuTC@xps>

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, October 19, 2017 14:59
> To: Li, Xiaoyun
> Cc: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org;
> Lu, Wenzhuo; Zhang, Helin; ophirmu@mellanox.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over
> memcpy
>
> 19/10/2017 04:45, Li, Xiaoyun:
> > Hi
> > > >
> > > > > The significant change of this patch is to call a function
> > > > > pointer for packet size > 128 (RTE_X86_MEMCPY_THRESH).
> > > > The perf drop is due to the function call replacing the inline call.
> > > >
> > > > > Please could you provide some benchmark numbers?
> > > > I ran memcpy_perf_test, which shows the time cost of memcpy. I
> > > > ran it on Broadwell with SSE and AVX2.
> > > > But I just drew pictures and looked at the trend; I did not compute
> > > > the exact percentage. Sorry about that.
> > > > The picture shows results for copy sizes of 2, 4, 6, 8, 9, 12, 16,
> > > > 32, 64, 128, 192, 256, 320, 384, 448, 512, 768, 1024, 1518, 1522,
> > > > 1536, 1600, 2048, 2560, 3072, 3584, 4096, 4608, 5120, 5632, 6144,
> > > > 6656, 7168, 7680, 8192.
> > > > In my test, as the size grows, the drop shrinks. (The copy time
> > > > indicates the perf.) From the trend picture, when the size is
> > > > smaller than 128 bytes, the perf drops a lot, almost 50%.
> > > > And above 128 bytes, it approaches the original DPDK.
> > > > I computed it just now: when the size is greater than 128 bytes
> > > > and smaller than 1024 bytes, the perf drops about 15%. Above
> > > > 1024 bytes, the perf drops about 4%.
> > > >
> > > > > From a test done at Mellanox, there might be a performance
> > > > > degradation of about 15% in testpmd txonly with AVX2.
> > > >
> > I did tests on X710, XXV710, X540 and MT27710 but didn't see any
> > performance degradation.
> > I used the command "./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf
> > -n 4 -- -i" and set fwd txonly.
> > I tested it on v17.11-rc1, then reverted my patch and tested it again.
> > "show port stats all" shows the throughput in pps. The results are
> > similar, with no drop.
> >
> > Did I miss something?
>
> I do not understand. Yesterday you confirmed a 15% drop with buffers
> between 128 and 1024 bytes.
> But you do not see this drop in your txonly tests, right?
>
Yes. The drop shows up in the unit test.
I use "make test -j", then "./build/app/test -c f -n 4",
and then run "memcpy_perf_autotest".
The results are the cycles that the memory copy costs.
But I only use it to show the trend, because I have heard that it is not
recommended to use micro benchmarks like test_memcpy_perf for memcpy
performance reports, as they are unlikely to reflect the performance of
real-world applications.
Details can be seen at
https://software.intel.com/en-us/articles/performance-optimization-of-memcpy-in-dpdk
And I didn't see a drop in the testpmd txonly test. Maybe that is because
it does not make many memcpy calls.

> > > Another thing: I will test testpmd txonly with Intel NICs and
> > > Mellanox NICs these days,
> > > and try adjusting RTE_X86_MEMCPY_THRESH to see if there is any
> > > improvement.
> > >
> > > > > Is there someone else seeing a performance degradation?
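
For anyone following the thread, here is a minimal sketch of the dispatch
scheme being discussed: copies up to RTE_X86_MEMCPY_THRESH stay on the
inline path, while larger copies go through a function pointer selected
once at startup from the CPU features. This is only an illustration of the
pattern, not the actual patch code; apart from RTE_X86_MEMCPY_THRESH,
every name in it is made up.

/*
 * Sketch only -- not the patch code. Apart from RTE_X86_MEMCPY_THRESH,
 * all names here are hypothetical.
 */
#include <stddef.h>
#include <string.h>

#define RTE_X86_MEMCPY_THRESH 128

/* Chosen once at startup, e.g. from CPU feature flags (AVX2/SSE). */
static void *(*memcpy_large_fn)(void *dst, const void *src, size_t n) = memcpy;

/* Placeholder for an AVX2-optimized large-copy routine. */
static void *memcpy_large_avx2(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

static void memcpy_dispatch_init(int cpu_has_avx2)
{
	if (cpu_has_avx2)
		memcpy_large_fn = memcpy_large_avx2;
}

/*
 * Small copies keep the old inline path; larger ones pay for an indirect
 * call, which is where the reported drop for 128-1024 byte buffers comes from.
 */
static inline void *demo_memcpy(void *dst, const void *src, size_t n)
{
	if (n <= RTE_X86_MEMCPY_THRESH)
		return memcpy(dst, src, n);   /* compiler can expand this inline */
	return memcpy_large_fn(dst, src, n);  /* indirect call at run time */
}

int main(void)
{
	char src[256] = "payload", dst[256];

	memcpy_dispatch_init(1);              /* pretend AVX2 was detected */
	demo_memcpy(dst, src, 64);            /* <= threshold: inline path */
	demo_memcpy(dst, src, sizeof(src));   /* > threshold: via the pointer */
	return 0;
}

Adjusting the threshold, as mentioned above, only moves the crossover point
between the two paths.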