From: Stephen Hemminger
To: Bruce Richardson
Cc: dev@dpdk.org
Date: Wed, 21 Jan 2015 11:49:47 -0800
Message-ID: <20150121114947.0753ae87@urahara>
In-Reply-To: <20150121132620.GC10756@bricha3-MOBL3>
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization

On Wed, 21 Jan 2015 13:26:20 +0000
Bruce Richardson wrote:

> On Wed, Jan 21, 2015 at 02:21:25PM +0100, Marc Sune wrote:
> >
> > On 21/01/15 14:02, Bruce Richardson wrote:
> > >On Wed, Jan 21, 2015 at 01:36:41PM +0100, Marc Sune wrote:
> > >>On 21/01/15 04:44, Wang, Zhihong wrote:
> > >>>>-----Original Message-----
> > >>>>From: Richardson, Bruce
> > >>>>Sent: Wednesday, January 21, 2015 12:15 AM
> > >>>>To: Neil Horman
> > >>>>Cc: Wang, Zhihong; dev@dpdk.org
> > >>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >>>>
> > >>>>On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > >>>>>On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > >>>>>>>-----Original Message-----
> > >>>>>>>From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > >>>>>>>Sent: Monday, January 19, 2015 9:02 PM
> > >>>>>>>To: Wang, Zhihong
> > >>>>>>>Cc: dev@dpdk.org
> > >>>>>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >>>>>>>
> > >>>>>>>On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com wrote:
> > >>>>>>>>This patch set optimizes memcpy for DPDK for both SSE and AVX platforms.
> > >>>>>>>>It also extends memcpy test coverage with unaligned cases and more test points.
> > >>>>>>>>
> > >>>>>>>>Optimization techniques are summarized below:
> > >>>>>>>>
> > >>>>>>>>1. Utilize full cache bandwidth
> > >>>>>>>>
> > >>>>>>>>2. Enforce aligned stores
> > >>>>>>>>
> > >>>>>>>>3. Apply load address alignment based on architecture features
> > >>>>>>>>
> > >>>>>>>>4. Make load/store addresses available as early as possible
> > >>>>>>>>
> > >>>>>>>>5. General optimization techniques like inlining, branch reduction and prefetch pattern access
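
As a rough illustration of techniques 2 and 4 above, the aligned-store
pattern might look like the simplified C sketch below. This is
illustrative code only, not taken from the patch: one unaligned store
covers the head of the buffer, the destination is then bumped to a
16-byte boundary, and the loop runs with aligned stores while the next
addresses are plain pointer increments.

/* Simplified sketch of aligned-store copying (illustrative only).
 * Requires SSE2; for n >= 16 there is exactly one unaligned head
 * store and one unaligned tail store, everything else is aligned. */
#include <stdint.h>
#include <string.h>
#include <emmintrin.h>

static inline void *
sketch_memcpy(void *dst, const void *src, size_t n)
{
	uint8_t *d = dst;
	const uint8_t *s = src;

	if (n < 16)
		return memcpy(dst, src, n);	/* small copies: fall back */

	/* Technique 2: cover the head with one unaligned store, then
	 * advance so that all remaining stores are 16-byte aligned. */
	_mm_storeu_si128((__m128i *)d, _mm_loadu_si128((const __m128i *)s));
	size_t head = 16 - ((uintptr_t)d & 15);
	d += head;
	s += head;
	n -= head;

	/* Technique 4: the next load/store addresses are simple pointer
	 * increments, known as early as possible. */
	while (n >= 16) {
		_mm_store_si128((__m128i *)d,
				_mm_loadu_si128((const __m128i *)s));
		d += 16;
		s += 16;
		n -= 16;
	}

	/* Unaligned tail store, overlapping bytes already written. */
	if (n)
		_mm_storeu_si128((__m128i *)(d + n - 16),
				 _mm_loadu_si128((const __m128i *)(s + n - 16)));
	return dst;
}

The patch's actual code is far more elaborate (size-specialized
branches, AVX paths, prefetching), which is what drives the compile
times discussed below.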
> > >>>>>>>>
> > >>>>>>>>Zhihong Wang (4):
> > >>>>>>>>  Disabled VTA for memcpy test in app/test/Makefile
> > >>>>>>>>  Removed unnecessary test cases in test_memcpy.c
> > >>>>>>>>  Extended test coverage in test_memcpy_perf.c
> > >>>>>>>>  Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX
> > >>>>>>>>    platforms
> > >>>>>>>>
> > >>>>>>>> app/test/Makefile                        |   6 +
> > >>>>>>>> app/test/test_memcpy.c                   |  52 +-
> > >>>>>>>> app/test/test_memcpy_perf.c              | 238 +++++---
> > >>>>>>>> .../common/include/arch/x86/rte_memcpy.h | 664 +++++++++++++++++------
> > >>>>>>>> 4 files changed, 656 insertions(+), 304 deletions(-)
> > >>>>>>>>
> > >>>>>>>>--
> > >>>>>>>>1.9.3
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>Are you able to compile this with gcc 4.9.2? The compilation of
> > >>>>>>>test_memcpy_perf is taking forever for me. It appears hung.
> > >>>>>>>Neil
> > >>>>>>Neil,
> > >>>>>>
> > >>>>>>Thanks for reporting this!
> > >>>>>>It should compile, but it will take quite some time if the CPU doesn't
> > >>>>>>support AVX2. The reasons are:
> > >>>>>>1. The SSE & AVX memcpy implementation is more complicated than the
> > >>>>>>AVX2 version, so the compiler takes more time to compile and optimize it.
> > >>>>>>2. The new test_memcpy_perf.c contains 126 constant memcpy calls for
> > >>>>>>better test case coverage, which is quite a lot.
> > >>>>>>
> > >>>>>>I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> > >>>>>>1. The whole compile process takes 9'41" with the original
> > >>>>>>test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls).
> > >>>>>>2. It takes only 2'41" after I reduce the number of constant memcpy
> > >>>>>>calls to 12 + 12 = 24.
> > >>>>>>
> > >>>>>>I'll reduce the memcpy calls in the next version of the patch.
> > >>>>>>
> > >>>>>ok, thank you. I'm all for optimization, but I think a compile that
> > >>>>>takes almost 10 minutes for a single file is going to generate some
> > >>>>>raised eyebrows when end users start tinkering with it
> > >>>>>
> > >>>>>Neil
> > >>>>>
> > >>>>>>Zhihong (John)
> > >>>>>>
> > >>>>Even two minutes is a very long time to compile, IMHO. The whole of DPDK
> > >>>>doesn't take that long to compile right now, and that's with a couple of huge
> > >>>>header files with routing tables in them. Any chance you could cut compile time
> > >>>>down to a few seconds while still having reasonable tests?
> > >>>>Also, when there is AVX2 present on the system, what is the compile time
> > >>>>like for that code?
> > >>>>
> > >>>> /Bruce
> > >>>Neil, Bruce,
> > >>>
> > >>>Some data first.
> > >>>
> > >>>Sandy Bridge without AVX2:
> > >>>1. original w/ 10 constant memcpy: 2'25"
> > >>>2. patch w/ 12 constant memcpy: 2'41"
> > >>>3. patch w/ 63 constant memcpy: 9'41"
> > >>>
> > >>>Haswell with AVX2:
> > >>>1. original w/ 10 constant memcpy: 1'57"
> > >>>2. patch w/ 12 constant memcpy: 1'56"
> > >>>3. patch w/ 63 constant memcpy: 3'16"
> > >>>
> > >>>Also, to address Bruce's question: we have to reduce the test cases to
> > >>>cut down compile time, because we use:
> > >>>1. intrinsics instead of assembly, for better flexibility and so the
> > >>>compiler can apply more optimizations
> > >>>2. a complex function body, for better performance
> > >>>3. inlining
> > >>>All of which increases compile time. But I think it'd be okay to do that
> > >>>as long as we can select a fair set of test points.
> > >>>
> > >>>It'd be great if you could give some suggestions, say, 12 points.
> > >>>
> > >>>Zhihong (John)
> > >>>
> > >>>
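
To make the compile-time discussion concrete, here is a toy model of
the pattern (my own sketch, with illustrative names; plain memcpy()
stands in for the inlined rte_memcpy()): each call site passes a
different compile-time-constant length, so an always-inline copy
routine gets re-expanded and re-optimized once per site.

/* Toy model of the perf-test pattern: one inline expansion per
 * constant-size call site.  All names are illustrative. */
#include <stdint.h>
#include <string.h>

static uint8_t src_buf[2048], dst_buf[2048];

static inline __attribute__((always_inline)) void
copy_const(size_t n)
{
	/* n is a compile-time constant at each call site below; the
	 * compiler expands and unrolls the copy separately per site.
	 * With a large rte_memcpy-style body, the constant also lets
	 * it fold the size-dispatch branches, again once per site. */
	memcpy(dst_buf, src_buf, n);
}

void run_perf_points(void)
{
	copy_const(1);
	copy_const(8);
	copy_const(64);
	copy_const(65);
	copy_const(255);
	copy_const(1024);
	/* ...the real file reportedly has 63 + 63 such points... */
}

Cutting the test points from 63 + 63 down to 12 + 12 shrinks exactly
that multiplier, consistent with the 9'41" vs. 2'41" figures above.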
> > >>While I agree that in the general case these long compilation times are
> > >>painful for the users, having a factor of 2-8x improvement in memcpy
> > >>operations is quite significant, especially in DPDK applications which
> > >>(unfortunately) need to rely heavily on them -- e.g. IP fragmentation
> > >>and reassembly.
> > >>
> > >>Why not have fast compilation by default, and a tunable config flag to
> > >>enable a highly optimized version of rte_memcpy (e.g. RTE_EAL_OPT_MEMCPY)?
> > >>
> > >>Marc
> > >>
> > >Out of interest, are these 2-8x improvements something you have benchmarked
> > >in these app scenarios? [i.e. not just in micro-benchmarks]
> >
> > How much that micro-speedup will end up affecting the performance of the
> > entire application is something I cannot say, so I agree that we should
> > probably have some additional benchmarks before deciding whether it pays
> > off to maintain two versions of rte_memcpy.
> >
> > There is, however, a bunch of possible DPDK applications that could
> > potentially benefit -- IP fragmentation, tunneling and specialized DPI
> > applications, among others -- since they involve a reasonable number of
> > memcpys per packet. My point was: *if* it proves beneficial enough, why
> > not have it as an option?
> >
> > Marc

> I agree, if it provides the speedups then we need to have it in - and quite
> possibly on by default, even.
>
> /Bruce

One issue I have is that as a vendor we need to ship one binary, not
different builds for each Intel chip variant. There is some support for
multi-chip versioned functions, but only in the latest GCC, which isn't
in Debian stable. And the multi-version functions are going to be more
expensive than inlining.

In some cases I have seen that fancy instructions look good on paper but
have nasty side effects, like CPU stalls and/or increased power
consumption that turns off turbo boost. Distros in general have the same
problem with special-case optimizations.
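
For concreteness, the single-binary dispatch I have in mind would look
roughly like the untested sketch below. RTE_EAL_OPT_MEMCPY is the config
flag name Marc suggested; every other name is made up for illustration.
Note that the dispatched function is reached through a real (indirect)
call, so it gives up exactly the inlining the patch's numbers depend on.

/* Rough, untested sketch: a GNU ifunc resolver picks an implementation
 * once, at dynamic-link time, based on the CPU the binary lands on, so
 * a single shipped binary can still use AVX2 where it exists. */
#include <string.h>

#ifdef RTE_EAL_OPT_MEMCPY

/* Placeholder bodies; a real patch would supply the tuned SSE and
 * AVX2 implementations. */
static void *memcpy_sse(void *d, const void *s, size_t n)
{
	return memcpy(d, s, n);
}

static void *memcpy_avx2(void *d, const void *s, size_t n)
{
	return memcpy(d, s, n);
}

/* The resolver can run before constructors, so probe the CPU
 * explicitly. */
static void *(*resolve_rte_memcpy(void))(void *, const void *, size_t)
{
	__builtin_cpu_init();
	if (__builtin_cpu_supports("avx2"))
		return memcpy_avx2;
	return memcpy_sse;
}

/* Callers see one symbol; the loader binds it to the resolver's pick.
 * Being an indirect call, it cannot be inlined -- the extra cost
 * mentioned above. */
void *rte_memcpy_opt(void *d, const void *s, size_t n)
	__attribute__((ifunc("resolve_rte_memcpy")));

#else /* default build: plain inlineable memcpy, fast to compile */

#define rte_memcpy_opt(d, s, n) memcpy((d), (s), (n))

#endif

Recent GCC can generate similar per-CPU function versions
automatically, but as said above, that isn't available in Debian
stable.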