From: "Wang, Zhihong" <zhihong.wang@intel.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 0/5] Optimize memcpy for AVX512 platforms
Date: Tue, 19 Jan 2016 02:37:46 +0000 [thread overview]
Message-ID: <8F6C2BD409508844A0EFC19955BE0941033A8CF5@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <20160118120629.5ed7bcd9@xeon-e3>
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, January 19, 2016 4:06 AM
> To: Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; Xie, Huawei
> <huawei.xie@intel.com>
> Subject: Re: [PATCH v2 0/5] Optimize memcpy for AVX512 platforms
>
> On Sun, 17 Jan 2016 22:05:09 -0500
> Zhihong Wang <zhihong.wang@intel.com> wrote:
>
> > This patch set optimizes DPDK memcpy for AVX512 platforms, to make full
> > utilization of hardware resources and deliver high performance.
> >
> > In current DPDK, memcpy holds a large proportion of execution time in
> > libs like Vhost, especially for large packets, and this patch can bring
> > considerable benefits.
> >
> > The implementation is based on the current DPDK memcpy framework, some
> > background introduction can be found in these threads:
> > http://dpdk.org/ml/archives/dev/2014-November/008158.html
> > http://dpdk.org/ml/archives/dev/2015-January/011800.html
> >
> > Code changes are:
> >
> > 1. Read CPUID to check if AVX512 is supported by CPU
> >
> > 2. Predefine AVX512 macro if AVX512 is enabled by compiler
> >
> > 3. Implement AVX512 memcpy and choose the right implementation based
> on
> > predefined macros
> >
> > 4. Decide alignment unit for memcpy perf test based on predefined macros
>
> Cool, I like it. How much impact does this have on VHOST?
The impact is significant, especially for enqueue (detailed numbers might not
be appropriate here due to policy :-), so I can only describe how I test it), because vhost
actually spends a large share of its time doing memcpy. Simply measure the 1024B RX/TX
time cost and compare it with 64B's, and you'll get a rough sense of it, although not a precise one.
My test cases include NIC2VM2NIC and VM2VM scenarios, which are the main
use cases currently, and use both throughput and RX/TX cycles for evaluation.
Thread overview: 23+ messages
2016-01-14 6:13 [dpdk-dev] [PATCH 0/4] " Zhihong Wang
2016-01-14 6:13 ` [dpdk-dev] [PATCH 1/4] lib/librte_eal: Identify AVX512 CPU flag Zhihong Wang
2016-01-14 6:13 ` [dpdk-dev] [PATCH 2/4] mk: Predefine AVX512 macro for compiler Zhihong Wang
2016-01-14 6:13 ` [dpdk-dev] [PATCH 3/4] lib/librte_eal: Optimize memcpy for AVX512 platforms Zhihong Wang
2016-01-14 6:13 ` [dpdk-dev] [PATCH 4/4] app/test: Adjust alignment unit for memcpy perf test Zhihong Wang
2016-01-14 16:48 ` [dpdk-dev] [PATCH 0/4] Optimize memcpy for AVX512 platforms Stephen Hemminger
2016-01-15 6:39 ` Wang, Zhihong
2016-01-15 22:03 ` Vincent JARDIN
2016-01-18 3:05 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang
2016-01-18 3:05 ` [dpdk-dev] [PATCH v2 1/5] lib/librte_eal: Identify AVX512 CPU flag Zhihong Wang
2016-01-18 3:05 ` [dpdk-dev] [PATCH v2 2/5] mk: Predefine AVX512 macro for compiler Zhihong Wang
2016-01-18 3:05 ` [dpdk-dev] [PATCH v2 3/5] lib/librte_eal: Optimize memcpy for AVX512 platforms Zhihong Wang
2016-01-18 3:05 ` [dpdk-dev] [PATCH v2 4/5] app/test: Adjust alignment unit for memcpy perf test Zhihong Wang
2016-01-18 3:05 ` [dpdk-dev] [PATCH v2 5/5] lib/librte_eal: Tune memcpy for prior platforms Zhihong Wang
2016-01-18 20:06 ` [dpdk-dev] [PATCH v2 0/5] Optimize memcpy for AVX512 platforms Stephen Hemminger
2016-01-19 2:37 ` Wang, Zhihong [this message]
2016-01-27 15:23 ` Thomas Monjalon
2016-01-28 6:09 ` Wang, Zhihong
2016-01-27 15:30 ` Thomas Monjalon
2016-01-27 18:48 ` Ananyev, Konstantin
2016-01-27 20:18 ` Thomas Monjalon
2017-08-30 9:37 ` linhaifeng
2017-09-18 5:10 ` Wang, Zhihong