From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 21 Jan 2015 13:02:34 +0000
From: Bruce Richardson
To: Marc Sune
Message-ID: <20150121130234.GB10756@bricha3-MOBL3>
References: <1421632414-10027-1-git-send-email-zhihong.wang@intel.com>
 <20150119130221.GB21790@hmsreliant.think-freely.org>
 <20150120151118.GD18449@hmsreliant.think-freely.org>
 <20150120161453.GA5316@bricha3-MOBL3>
 <54BF9D59.7070104@bisdn.de>
In-Reply-To: <54BF9D59.7070104@bisdn.de>
Organization: Intel Shannon Ltd.
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
Cc: dev@dpdk.org

On Wed, Jan 21, 2015 at 01:36:41PM +0100, Marc Sune wrote:
> 
> On 21/01/15 04:44, Wang, Zhihong wrote:
> >
> >>-----Original Message-----
> >>From: Richardson, Bruce
> >>Sent: Wednesday, January 21, 2015 12:15 AM
> >>To: Neil Horman
> >>Cc: Wang, Zhihong; dev@dpdk.org
> >>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> >>
> >>On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> >>>On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> >>>>
> >>>>>-----Original Message-----
> >>>>>From: Neil Horman [mailto:nhorman@tuxdriver.com]
> >>>>>Sent: Monday, January 19, 2015 9:02 PM
> >>>>>To: Wang, Zhihong
> >>>>>Cc: dev@dpdk.org
> >>>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> >>>>>
> >>>>>On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com wrote:
> >>>>>>This patch set optimizes memcpy for DPDK for both SSE and AVX
> >>>>>>platforms.
> >>>>>>It also extends memcpy test coverage with unaligned cases and more
> >>>>>>test points.
> >>>>>>Optimization techniques are summarized below:
> >>>>>>
> >>>>>>1. Utilize full cache bandwidth
> >>>>>>
> >>>>>>2. Enforce aligned stores
> >>>>>>
> >>>>>>3. Apply load address alignment based on architecture features
> >>>>>>
> >>>>>>4. Make load/store addresses available as early as possible
> >>>>>>
> >>>>>>5. General optimization techniques like inlining, branch reduction
> >>>>>>   and prefetch-pattern access
> >>>>>>
> >>>>>>Zhihong Wang (4):
> >>>>>>  Disabled VTA for memcpy test in app/test/Makefile
> >>>>>>  Removed unnecessary test cases in test_memcpy.c
> >>>>>>  Extended test coverage in test_memcpy_perf.c
> >>>>>>  Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX
> >>>>>>    platforms
> >>>>>>
> >>>>>> app/test/Makefile                        |   6 +
> >>>>>> app/test/test_memcpy.c                   |  52 +-
> >>>>>> app/test/test_memcpy_perf.c              | 238 +++++---
> >>>>>> .../common/include/arch/x86/rte_memcpy.h | 664 +++++++++++++++------
> >>>>>> 4 files changed, 656 insertions(+), 304 deletions(-)
> >>>>>>
> >>>>>>--
> >>>>>>1.9.3
> >>>>>>
> >>>>>>
> >>>>>Are you able to compile this with gcc 4.9.2? The compilation of
> >>>>>test_memcpy_perf is taking forever for me. It appears hung.
> >>>>>Neil
> >>>>
> >>>>Neil,
> >>>>
> >>>>Thanks for reporting this!
> >>>>It should compile, but it will take quite some time if the CPU doesn't
> >>>>support AVX2. The reasons are:
> >>>>1. The SSE & AVX memcpy implementation is more complicated than the
> >>>>   AVX2 version, so the compiler takes more time to compile and
> >>>>   optimize it.
> >>>>2. The new test_memcpy_perf.c contains 126 constant memcpy calls for
> >>>>   better test case coverage, which is quite a lot.
> >>>>
> >>>>I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> >>>>1. The whole compile process takes 9'41" with the original
> >>>>   test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls).
> >>>>2. It takes only 2'41" after I reduce the constant memcpy call count
> >>>>   to 12 + 12 = 24.
> >>>>
> >>>>I'll reduce the memcpy call count in the next version of the patch.
> >>>>
> >>>Ok, thank you.
> >>>I'm all for optimization, but I think a compile that takes almost
> >>>10 minutes for a single file is going to generate some raised eyebrows
> >>>when end users start tinkering with it.
> >>>
> >>>Neil
> >>>
> >>>>Zhihong (John)
> >>>>
> >>Even two minutes is a very long time to compile, IMHO. The whole of DPDK
> >>doesn't take that long to compile right now, and that's with a couple of
> >>huge header files with routing tables in them. Any chance you could cut
> >>compile time down to a few seconds while still having reasonable tests?
> >>Also, when there is AVX2 present on the system, what is the compile time
> >>like for that code?
> >>
> >> /Bruce
> >Neil, Bruce,
> >
> >Some data first.
> >
> >Sandy Bridge without AVX2:
> >1. original w/ 10 constant memcpy calls: 2'25"
> >2. patch w/ 12 constant memcpy calls: 2'41"
> >3. patch w/ 63 constant memcpy calls: 9'41"
> >
> >Haswell with AVX2:
> >1. original w/ 10 constant memcpy calls: 1'57"
> >2. patch w/ 12 constant memcpy calls: 1'56"
> >3. patch w/ 63 constant memcpy calls: 3'16"
> >
> >Also, to address Bruce's question: we have to reduce the test cases to
> >cut down compile time, because we use:
> >1. intrinsics instead of assembly, for better flexibility and to let the
> >   compiler apply more optimizations
> >2. a complex function body, for better performance
> >3. inlining
> >All of these increase compile time.
> >But I think it'd be okay to do that, as long as we can select a fair set
> >of test points.
> >
> >It'd be great if you could give some suggestions, say, 12 points.
> >
> >Zhihong (John)
> >
> >
> While I agree that in the general case these long compilation times are
> painful for users, a factor of 2-8x in memcpy operations is quite an
> improvement, especially for DPDK applications, which (unfortunately)
> need to rely heavily on them -- e.g. IP fragmentation and reassembly.
> 
> Why not have fast compilation by default, and a tunable config flag to
> enable a highly optimized version of rte_memcpy (e.g. RTE_EAL_OPT_MEMCPY)?
> 
> Marc
> 
Out of interest, are these 2-8x improvements something you have benchmarked
in these app scenarios? [i.e. not just in micro-benchmarks]

/Bruce
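For readers following the thread, technique 2 from the patch summary
("enforce aligned stores") can be sketched roughly as below. This is an
illustrative stand-in, not the patch's actual rte_memcpy: the function name
is invented, and plain memcpy stands in for the aligned vector stores
(_mm_store_si128 and friends) a real SSE/AVX version would use once the
destination pointer has been brought up to alignment.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Sketch only -- NOT the patch's rte_memcpy. Copy an unaligned head until
 * the destination is 16-byte aligned, then move 16-byte blocks (each store
 * now lands on an aligned address), then copy the tail. */
static void *copy_with_aligned_stores(void *dst, const void *src, size_t n)
{
    uint8_t *d = dst;
    const uint8_t *s = src;

    /* Head: bytes needed to bring dst up to 16-byte alignment. */
    size_t head = (16 - ((uintptr_t)d & 15)) & 15;
    if (head > n)
        head = n;
    memcpy(d, s, head);
    d += head;
    s += head;
    n -= head;

    /* Body: every 16-byte block store now targets an aligned address. */
    while (n >= 16) {
        memcpy(d, s, 16); /* stand-in for an aligned vector store */
        d += 16;
        s += 16;
        n -= 16;
    }

    /* Tail: whatever is left over. */
    memcpy(d, s, n);
    return dst;
}
```

The head/body/tail split is also part of why the real implementation's
function body is complex, which feeds directly into the long compile times
discussed above.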
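Marc's opt-in idea could look something like the following. This is a
hedged sketch under assumed names: RTE_EAL_OPT_MEMCPY is the hypothetical
build-time flag he proposes (e.g. passed as -DRTE_EAL_OPT_MEMCPY), the
function name is invented, and memcpy stands in for the patch's hand-tuned
SSE/AVX body so the sketch stays self-contained.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#ifdef RTE_EAL_OPT_MEMCPY
/* Opt-in path: the expensive-to-compile intrinsic implementation from the
 * patch would live here; memcpy is only a placeholder in this sketch. */
static inline void *
rte_memcpy_sketch(void *dst, const void *src, size_t n)
{
    return memcpy(dst, src, n);
}
#else
/* Default path: cheap to compile, so normal builds stay fast. */
static inline void *
rte_memcpy_sketch(void *dst, const void *src, size_t n)
{
    return memcpy(dst, src, n);
}
#endif
```

With this layout, only users who explicitly set the flag pay the long
test_memcpy_perf-style compile times the thread is debating.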