From: "Wang, Zhihong" <zhihong.wang@intel.com>
To: Neil Horman <nhorman@tuxdriver.com>,
"Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
Date: Fri, 23 Jan 2015 03:26:04 +0000 [thread overview]
Message-ID: <F60F360A2500CD45ACDB1D700268892D0E7603D6@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <20150121123801.GB18515@hmsreliant.think-freely.org>
> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com]
> Sent: Wednesday, January 21, 2015 8:38 PM
> To: Ananyev, Konstantin
> Cc: Wang, Zhihong; Richardson, Bruce; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
>
> On Wed, Jan 21, 2015 at 12:02:57PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Zhihong
> > > Sent: Wednesday, January 21, 2015 3:44 AM
> > > To: Richardson, Bruce; Neil Horman
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Wednesday, January 21, 2015 12:15 AM
> > > > To: Neil Horman
> > > > Cc: Wang, Zhihong; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > >
> > > > On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > > > > On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > > > > Sent: Monday, January 19, 2015 9:02 PM
> > > > > > > To: Wang, Zhihong
> > > > > > > Cc: dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > > > >
> > > > > > > On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com wrote:
> > > > > > > > This patch set optimizes memcpy for DPDK for both SSE and AVX
> > > > > > > > platforms.
> > > > > > > > It also extends memcpy test coverage with unaligned cases and
> > > > > > > > more test points.
> > > > > > > >
> > > > > > > > Optimization techniques are summarized below:
> > > > > > > >
> > > > > > > > 1. Utilize full cache bandwidth
> > > > > > > >
> > > > > > > > 2. Enforce aligned stores
> > > > > > > >
> > > > > > > > 3. Apply load address alignment based on architecture
> > > > > > > > features
> > > > > > > >
> > > > > > > > 4. Make load/store address available as early as possible
> > > > > > > >
> > > > > > > > 5. General optimization techniques like inlining, branch
> > > > > > > > reducing, prefetch pattern access
> > > > > > > >
> > > > > > > > Zhihong Wang (4):
> > > > > > > > Disabled VTA for memcpy test in app/test/Makefile
> > > > > > > > Removed unnecessary test cases in test_memcpy.c
> > > > > > > > Extended test coverage in test_memcpy_perf.c
> > > > > > > > Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX
> > > > > > > > platforms
> > > > > > > >
> > > > > > > > app/test/Makefile | 6 +
> > > > > > > > app/test/test_memcpy.c | 52 +-
> > > > > > > > app/test/test_memcpy_perf.c | 238 +++++---
> > > > > > > > .../common/include/arch/x86/rte_memcpy.h | 664 +++++++++++++++------
> > > > > > > > 4 files changed, 656 insertions(+), 304 deletions(-)
> > > > > > > >
> > > > > > > > --
> > > > > > > > 1.9.3
> > > > > > > >
> > > > > > > >
> > > > > > > Are you able to compile this with gcc 4.9.2?  The compilation of
> > > > > > > test_memcpy_perf is taking forever for me.  It appears hung.
> > > > > > > Neil
> > > > > >
> > > > > >
> > > > > > Neil,
> > > > > >
> > > > > > Thanks for reporting this!
> > > > > > It should compile, but it will take quite some time if the CPU doesn't
> > > > > > support AVX2. The reasons are:
> > > > > > 1. The SSE & AVX memcpy implementation is more complicated than the
> > > > > >    AVX2 version, so the compiler takes more time to compile and
> > > > > >    optimize it.
> > > > > > 2. The new test_memcpy_perf.c contains 126 constant memcpy calls for
> > > > > >    better test case coverage, which is quite a lot.
> > > > > >
> > > > > > I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> > > > > > 1. The whole compile process takes 9'41" with the original
> > > > > >    test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls)
> > > > > > 2. It takes only 2'41" after I reduce the number of constant memcpy
> > > > > >    calls to 12 + 12 = 24
> > > > > >
> > > > > > I'll reduce the number of memcpy calls in the next version of the patch.
> > > > > >
> > > > > ok, thank you. I'm all for optimization, but I think a compile that
> > > > > takes almost 10 minutes for a single file is going to generate some
> > > > > raised eyebrows when end users start tinkering with it.
> > > > >
> > > > > Neil
> > > > >
> > > > > > Zhihong (John)
> > > > > >
> > > > Even two minutes is a very long time to compile, IMHO. The whole of DPDK
> > > > doesn't take that long to compile right now, and that's with a couple of
> > > > huge header files with routing tables in it. Any chance you could cut
> > > > compile time down to a few seconds while still having reasonable tests?
> > > > Also, when there is AVX2 present on the system, what is the compile time
> > > > like for that code?
> > > >
> > > > /Bruce
> > >
> > > Neil, Bruce,
> > >
> > > Some data first.
> > >
> > > Sandy Bridge without AVX2:
> > > 1. original w/ 10 constant memcpy: 2'25"
> > > 2. patch w/ 12 constant memcpy: 2'41"
> > > 3. patch w/ 63 constant memcpy: 9'41"
> > >
> > > Haswell with AVX2:
> > > 1. original w/ 10 constant memcpy: 1'57"
> > > 2. patch w/ 12 constant memcpy: 1'56"
> > > 3. patch w/ 63 constant memcpy: 3'16"
> > >
> > > Also, to address Bruce's question: we have to reduce the test cases to cut
> > > down compile time, because we use:
> > > 1. intrinsics instead of assembly, for better flexibility and to utilize
> > >    more compiler optimization
> > > 2. a complex function body, for better performance
> > > 3. inlining
> > > This increases compile time.
> >
> > We use intrinsics and inlining in many other places too.
> > Why did it suddenly become a problem here?
> I agree, something just doesn't feel right here. I'm not sure what it is yet,
> but I don't see how a memcpy function can be so complex as to take almost 10
> minutes to compile. It's almost like we're recursively including something
> here and it's driving gcc into a huge loop.
> Neil
>
> > Konstantin
> >
> > > But I think it'd be okay to do that as long as we can select a fair set of
> > > test points.
> > >
> > > It'd be great if you could give some suggestion, say, 12 points.
> > >
> > > Zhihong (John)
> > >
> > >
> > >
> >
> >
Konstantin, Bruce, Neil,
The reason it takes so long is simply that there are too many function calls.
Keep in mind that there are (63 + 63) * 4 = 504 inline memcpy calls with constant length; gcc unrolls the memcpy function body and generates instructions directly for each of them based on the immediate value.
I separately wrote a small program that includes rte_memcpy.h and has a "main" function calling rte_memcpy 120 * 4 = 480 times with constant lengths; it took 11 minutes to compile.
Also, the compile time doesn't grow linearly: one group of 120 memcpy calls took less than 1 minute.
So I think that to reduce the compile time, we need to reduce the number of constant test cases; for comparison, the original file in DPDK 1.8.0 has only (10 + 10) * 4 calls.
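For reference, the effect can be reproduced stand-alone with something like the sketch below. This is a hypothetical file I'm sketching here (not the actual test_memcpy_perf.c); every call passes a compile-time constant length, so the always-inline rte_memcpy gets expanded and optimized separately at each call site.

/* constexpr_memcpy_repro.c -- hypothetical stand-alone sketch, not the real
 * test file; build it against the DPDK headers with your usual CFLAGS. */
#include <stdint.h>
#include <rte_memcpy.h>

static uint8_t src[8192], dst[8192];

/* Each constant length lets gcc pick and fully unroll a fixed sequence of
 * SSE/AVX loads and stores for that exact size at this call site. */
#define DO_COPY(n) rte_memcpy(dst, src, (n))

int main(void)
{
    DO_COPY(3);    DO_COPY(15);   DO_COPY(17);   DO_COPY(31);
    DO_COPY(32);   DO_COPY(63);   DO_COPY(64);   DO_COPY(127);
    DO_COPY(128);  DO_COPY(255);  DO_COPY(1025); DO_COPY(8192);
    /* Repeat with a few hundred distinct constants to see the
     * multi-minute compile times discussed above. */
    return (int)dst[0];
}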
To Konstantin,
Intrinsics are not the problem. What I meant is: if we write assembly, gcc may not have to optimize it, but if we use intrinsics, gcc treats them like a piece of C code and optimizes them, and that may increase compile time.
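A rough illustration of the difference (just a sketch I'm making up here, not code from the patch):

#include <stdint.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

/* Intrinsics: gcc sees ordinary C, so it can schedule, unroll and
 * constant-propagate this at every call site, which means more
 * optimization work per call. */
static inline void copy16_intrin(uint8_t *dst, const uint8_t *src)
{
    __m128i v = _mm_loadu_si128((const __m128i *)src);
    _mm_storeu_si128((__m128i *)dst, v);
}

/* Inline asm: the block is largely opaque to the optimizer, so gcc emits it
 * as written and spends little extra time on it (x86-64, AT&T syntax). */
static inline void copy16_asm(uint8_t *dst, const uint8_t *src)
{
    __asm__ volatile("movdqu (%1), %%xmm0\n\t"
                     "movdqu %%xmm0, (%0)"
                     :
                     : "r"(dst), "r"(src)
                     : "xmm0", "memory");
}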
To Bruce,
The compile times I reported earlier in this thread were measured like this: make clean; rm -rf x86_64-native-linuxapp-gcc; then make with -j 1.
Thread overview: 48+ messages
2015-01-19 1:53 zhihong.wang
2015-01-19 1:53 ` [dpdk-dev] [PATCH 1/4] app/test: Disabled VTA for memcpy test in app/test/Makefile zhihong.wang
2015-01-19 1:53 ` [dpdk-dev] [PATCH 2/4] app/test: Removed unnecessary test cases in test_memcpy.c zhihong.wang
2015-01-19 1:53 ` [dpdk-dev] [PATCH 3/4] app/test: Extended test coverage in test_memcpy_perf.c zhihong.wang
2015-01-19 1:53 ` [dpdk-dev] [PATCH 4/4] lib/librte_eal: Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX platforms zhihong.wang
2015-01-20 17:15 ` Stephen Hemminger
2015-01-20 19:16 ` Neil Horman
2015-01-21 3:18 ` Wang, Zhihong
2015-01-25 20:02 ` Jim Thompson
2015-01-26 14:43 ` Wodkowski, PawelX
2015-01-27 5:12 ` Wang, Zhihong
2015-01-19 13:02 ` [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization Neil Horman
2015-01-20 3:01 ` Wang, Zhihong
2015-01-20 15:11 ` Neil Horman
2015-01-20 16:14 ` Bruce Richardson
2015-01-21 3:44 ` Wang, Zhihong
2015-01-21 11:40 ` Bruce Richardson
2015-01-21 12:02 ` Ananyev, Konstantin
2015-01-21 12:38 ` Neil Horman
2015-01-23 3:26 ` Wang, Zhihong [this message]
2015-01-21 12:36 ` Marc Sune
2015-01-21 13:02 ` Bruce Richardson
2015-01-21 13:21 ` Marc Sune
2015-01-21 13:26 ` Bruce Richardson
2015-01-21 19:49 ` Stephen Hemminger
2015-01-21 20:54 ` Neil Horman
2015-01-21 21:25 ` Jim Thompson
2015-01-22 0:53 ` Stephen Hemminger
2015-01-22 9:06 ` Luke Gorrie
2015-01-22 13:29 ` Jay Rolette
2015-01-22 18:27 ` Luke Gorrie
2015-01-22 19:36 ` Jay Rolette
2015-01-22 18:21 ` EDMISON, Kelvin (Kelvin)
2015-01-27 8:22 ` Wang, Zhihong
2015-01-28 21:48 ` EDMISON, Kelvin (Kelvin)
2015-01-29 1:53 ` Wang, Zhihong
2015-01-23 6:52 ` Wang, Zhihong
2015-01-26 18:29 ` Ananyev, Konstantin
2015-01-27 1:42 ` Wang, Zhihong
2015-01-27 11:30 ` Ananyev, Konstantin
2015-01-27 12:19 ` Ananyev, Konstantin
2015-01-28 2:06 ` Wang, Zhihong
2015-01-25 14:50 ` Luke Gorrie
2015-01-26 1:30 ` Wang, Zhihong
2015-01-26 8:03 ` Luke Gorrie
2015-01-27 7:19 ` Wang, Zhihong
2015-01-27 13:57 ` [dpdk-dev] [snabb-devel] " Luke Gorrie
2015-01-29 3:42 ` [dpdk-dev] " Fu, JingguoX