From: "Yao, Lei A" <lei.a.yao@intel.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>,
"Yang, Zhiyong" <zhiyong.yang@intel.com>
Cc: "Richardson, Bruce" <bruce.richardson@intel.com>,
"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
Thomas Monjalon <thomas.monjalon@6wind.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
"Wang, Zhihong" <zhihong.wang@intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/4] eal/common: introduce rte_memset on IA platform
Date: Tue, 20 Dec 2016 02:41:17 +0000 [thread overview]
Message-ID: <2DBBFF226F7CF64BAFCA79B681719D953A1365FB@shsmsx102.ccr.corp.intel.com> (raw)
In-Reply-To: <20161219062736.GO18991@yliu-dev.sh.intel.com>
> On Fri, Dec 16, 2016 at 10:19:43AM +0000, Yang, Zhiyong wrote:
> > > > I ran the same virtio/vhost loopback tests without a NIC.
> > > > I can see a throughput drop when choosing functions at run time,
> > > > compared to the original code, on the same platform (my
> > > > machine is Haswell):
> > > >
> > > > Packet size    perf drop
> > > > 64             -4%
> > > > 256            -5.4%
> > > > 1024           -5%
> > > > 1500           -2.5%
> > > >
> > > > Another thing: when I run memcpy_perf_autotest with N <= 128, the
> > > > rte_memcpy perf gains almost disappear when choosing functions at run
> > > > time. For other values of N, the perf gains become narrower.
> > > >
> > > How narrow? How significant is the improvement we gain from having
> > > to maintain our own copy of memcpy? If the libc version is nearly as
> > > good, we should just use that.
> > >
> > > /Bruce
> >
> > Zhihong sent a patch about rte_memcpy. From the patch, we can see
> > that the memcpy optimization work brings obvious perf improvements
> > over glibc for DPDK.
>
> Just a clarification: it's better than the __original DPDK__ rte_memcpy
> but not the glibc one. That makes me wonder: has anyone tested the memcpy
> with big packets? Does the one from DPDK outperform the one from glibc,
> even for big packets?
>
> --yliu
>
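For context, the run-time selection discussed above amounts to dispatching
every copy through a function pointer resolved once at startup from CPU
flags, which adds an indirect call and defeats inlining. A minimal sketch
of that pattern, assuming GCC builtins; the function names here are
illustrative, not the actual DPDK code:

#include <stddef.h>
#include <string.h>

/* Fallback path; stands in for the generic implementation. */
static void *copy_generic(void *dst, const void *src, size_t n)
{
        return memcpy(dst, src, n);
}

/* Stand-in for a vectorized (e.g. AVX2) implementation. */
static void *copy_avx2(void *dst, const void *src, size_t n)
{
        return memcpy(dst, src, n);
}

/* Resolved once at startup; every caller then pays an indirect call
 * and loses the inlining a static inline rte_memcpy would get. */
static void *(*copy_fn)(void *, const void *, size_t) = copy_generic;

__attribute__((constructor))
static void copy_fn_init(void)
{
        if (__builtin_cpu_supports("avx2"))
                copy_fn = copy_avx2;
}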
I have tested the loopback performance of rte_memcpy and glibc memcpy. For both small
and big packets, rte_memcpy has better performance. My test environment is as follows:
CPU: BDW
OS: Ubuntu 16.04
Kernel: 4.4.0
gcc: 5.4.0
Path: mergeable
Size    rte_memcpy performance gain
64      31%
128     35%
260     27%
520     33%
1024    18%
1500    12%
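
For reference, the comparison behind these numbers is essentially a timed
copy loop per packet size. A rough, self-contained sketch of that kind of
measurement (iteration count and buffer sizes are arbitrary; this is not
the actual memcpy_perf_autotest code):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_cycles.h>
#include <rte_memcpy.h>

#define ITERS 1000000

/* Time ITERS copies of 'size' bytes with glibc memcpy. */
static uint64_t time_glibc(char *dst, const char *src, size_t size)
{
        uint64_t start = rte_rdtsc();
        for (int i = 0; i < ITERS; i++)
                memcpy(dst, src, size);
        return rte_rdtsc() - start;
}

/* Same loop with rte_memcpy. */
static uint64_t time_rte(char *dst, const char *src, size_t size)
{
        uint64_t start = rte_rdtsc();
        for (int i = 0; i < ITERS; i++)
                rte_memcpy(dst, src, size);
        return rte_rdtsc() - start;
}

int main(void)
{
        /* In practice a volatile sink is needed to stop the compiler
         * from hoisting or eliding the copies entirely. */
        static char src[1500], dst[1500];
        const size_t sizes[] = { 64, 128, 260, 520, 1024, 1500 };

        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("%4zu bytes: glibc %" PRIu64 " cycles, "
                       "rte %" PRIu64 " cycles\n",
                       sizes[i],
                       time_glibc(dst, src, sizes[i]),
                       time_rte(dst, src, sizes[i]));
        return 0;
}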
--Lei
> > http://www.dpdk.org/dev/patchwork/patch/17753/
> > git log as follows:
> > This patch is tested on Ivy Bridge, Haswell and Skylake; it provides
> > up to 20% gain for Virtio Vhost PVP traffic, with packet sizes ranging
> > from 64 to 1500 bytes.
> >
> > thanks
> > Zhiyong
Thread overview: 44+ messages
2016-12-05 8:26 [dpdk-dev] [PATCH 0/4] eal/common: introduce rte_memset and related test Zhiyong Yang
2016-12-02 10:00 ` Maxime Coquelin
2016-12-06 6:33 ` Yang, Zhiyong
2016-12-06 8:29 ` Maxime Coquelin
2016-12-07 9:28 ` Yang, Zhiyong
2016-12-07 9:37 ` Yuanhan Liu
2016-12-07 9:43 ` Yang, Zhiyong
2016-12-07 9:48 ` Yuanhan Liu
2016-12-05 8:26 ` [dpdk-dev] [PATCH 1/4] eal/common: introduce rte_memset on IA platform Zhiyong Yang
2016-12-02 10:25 ` Thomas Monjalon
2016-12-08 7:41 ` Yang, Zhiyong
2016-12-08 9:26 ` Ananyev, Konstantin
2016-12-08 9:53 ` Yang, Zhiyong
2016-12-08 10:27 ` Bruce Richardson
2016-12-08 10:30 ` Ananyev, Konstantin
2016-12-11 12:32 ` Yang, Zhiyong
2016-12-15 6:51 ` Yang, Zhiyong
2016-12-15 10:12 ` Bruce Richardson
2016-12-16 10:19 ` Yang, Zhiyong
2016-12-19 6:27 ` Yuanhan Liu
2016-12-20 2:41 ` Yao, Lei A [this message]
2016-12-15 10:53 ` Ananyev, Konstantin
2016-12-16 2:15 ` Yang, Zhiyong
2016-12-16 11:47 ` Ananyev, Konstantin
2016-12-20 9:31 ` Yang, Zhiyong
2016-12-08 15:09 ` Thomas Monjalon
2016-12-11 12:04 ` Yang, Zhiyong
2016-12-27 10:04 ` [dpdk-dev] [PATCH v2 0/4] eal/common: introduce rte_memset and related test Zhiyong Yang
2016-12-27 10:04 ` [dpdk-dev] [PATCH v2 1/4] eal/common: introduce rte_memset on IA platform Zhiyong Yang
2016-12-27 10:04 ` [dpdk-dev] [PATCH v2 2/4] app/test: add functional autotest for rte_memset Zhiyong Yang
2016-12-27 10:04 ` [dpdk-dev] [PATCH v2 3/4] app/test: add performance " Zhiyong Yang
2016-12-27 10:04 ` [dpdk-dev] [PATCH v2 4/4] lib/librte_vhost: improve vhost perf using rte_memset Zhiyong Yang
2017-01-09 9:48 ` [dpdk-dev] [PATCH v2 0/4] eal/common: introduce rte_memset and related test Yang, Zhiyong
2017-01-17 6:24 ` Yang, Zhiyong
2017-01-17 20:14 ` Thomas Monjalon
2017-01-18 0:15 ` Vincent JARDIN
2017-01-18 2:42 ` Yang, Zhiyong
2017-01-18 7:42 ` Thomas Monjalon
2017-01-19 1:36 ` Yang, Zhiyong
2016-12-05 8:26 ` [dpdk-dev] [PATCH 2/4] app/test: add functional autotest for rte_memset Zhiyong Yang
2016-12-05 8:26 ` [dpdk-dev] [PATCH 3/4] app/test: add performance " Zhiyong Yang
2016-12-05 8:26 ` [dpdk-dev] [PATCH 4/4] lib/librte_vhost: improve vhost perf using rte_memset Zhiyong Yang
2016-12-02 9:46 ` Thomas Monjalon
2016-12-06 8:04 ` Yang, Zhiyong