DPDK patches and discussions
From: Tiwei Bie <tiwei.bie@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: dev@dpdk.org, yliu@fridaylinux.org,
	Zhihong Wang <zhihong.wang@intel.com>,
	Zhiyong Yang <zhiyong.yang@intel.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: adaptively batch small guest memory copies
Date: Fri, 8 Sep 2017 08:48:50 +0800
Message-ID: <20170908004849.GA18498@debian-ZGViaWFuCg>
In-Reply-To: <a236798f-c4fe-8651-3471-b766c127f346@redhat.com>

Hi Maxime,

On Thu, Sep 07, 2017 at 07:47:57PM +0200, Maxime Coquelin wrote:
> Hi Tiwei,
> 
> On 08/24/2017 04:19 AM, Tiwei Bie wrote:
> > This patch adaptively batches small guest memory copies.
> > Batching the small copies greatly improves the efficiency of
> > executing the memory LOAD instructions, because the memory LOAD
> > latency can be effectively hidden by the pipeline. We saw great
> > performance boosts in small-packet PVP tests.
> > 
> > This patch improves the performance for small packets, and it
> > distinguishes packets by size. So although the performance for
> > big packets doesn't change, it becomes relatively easy to add
> > special optimizations for big packets too.
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
> > Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
> > ---
> > This optimization depends on the CPU's internal pipeline design,
> > so further tests (e.g. on ARM) from the community are appreciated.
> > 
> >   lib/librte_vhost/vhost.c      |   2 +-
> >   lib/librte_vhost/vhost.h      |  13 +++
> >   lib/librte_vhost/vhost_user.c |  12 +++
> >   lib/librte_vhost/virtio_net.c | 240 ++++++++++++++++++++++++++++++++----------
> >   4 files changed, 209 insertions(+), 58 deletions(-)
> 
> I did some PVP benchmarking with your patch.
> First I tried my standard PVP setup, with io forwarding on the host
> and macswap on the guest in bidirectional mode.
> 
> With this, I noticed no improvement (18.8Mpps), but I think that is
> because the guest is the bottleneck here.
> So I changed my setup to do csum forwarding on the host side, so that
> the host's PMD threads are more loaded.
> 
> In this case, I noticed a great improvement: I get 18.8Mpps with your
> patch instead of 14.8Mpps without! Great work!
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
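
For reference, the forwarding modes described above are selected at
the testpmd prompt; a minimal sketch of the commands involved (port,
queue and core options omitted):

  testpmd> set fwd io       # first host-side setup: pure I/O forwarding
  testpmd> set fwd macswap  # guest side: swap source/destination MACs
  testpmd> set fwd csum     # second host-side setup: checksum forwarding,
                            # which loads the PMD threads more
  testpmd> start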

Thank you very much for taking the time to review and test this patch! :-)

Best regards,
Tiwei Bie
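
For readers new to the technique under discussion, here is a minimal,
self-contained sketch of the adaptive batching idea; it is not the
patch code itself, and the names (batch_copy_elem, copy_adaptive,
BATCH_THRESHOLD) and the 256-byte threshold are illustrative
assumptions, with plain memcpy() standing in for DPDK's rte_memcpy():

  /*
   * Sketch of adaptive copy batching: big copies are performed
   * immediately; small copies are queued and replayed back to back
   * so the CPU pipeline can overlap their memory LOAD latencies.
   */
  #include <stdint.h>
  #include <string.h>

  #define BATCH_THRESHOLD 256	/* assumed cut-off for a "small" copy */
  #define MAX_BATCH_SIZE	64	/* assumed capacity of the pending list */

  struct batch_copy_elem {
  	void		*dst;
  	const void	*src;
  	uint32_t	 len;
  };

  struct copy_batcher {
  	struct batch_copy_elem	elems[MAX_BATCH_SIZE];
  	uint32_t		nr;
  };

  /* Replay all queued small copies in one tight loop. */
  static void
  do_batched_copies(struct copy_batcher *b)
  {
  	uint32_t i;

  	for (i = 0; i < b->nr; i++)
  		memcpy(b->elems[i].dst, b->elems[i].src, b->elems[i].len);
  	b->nr = 0;
  }

  /*
   * Adaptive entry point: copies at or above the threshold are done
   * right away; smaller ones are queued. The caller flushes with
   * do_batched_copies() at the end of each burst.
   */
  static void
  copy_adaptive(struct copy_batcher *b, void *dst, const void *src,
  	      uint32_t len)
  {
  	if (len >= BATCH_THRESHOLD) {
  		memcpy(dst, src, len);
  		return;
  	}
  	if (b->nr == MAX_BATCH_SIZE)
  		do_batched_copies(b);	/* pending list full: flush now */
  	b->elems[b->nr].dst = dst;
  	b->elems[b->nr].src = src;
  	b->elems[b->nr].len = len;
  	b->nr++;
  }

The benefit comes from the replay loop issuing many independent small
copies back to back, so their memory LOAD latencies overlap in the
pipeline rather than each copy waiting on the work that produced it.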


Thread overview: 13+ messages
2017-08-24  2:19 Tiwei Bie
2017-08-28  6:31 ` Jens Freimann
2017-08-28 11:41   ` Yao, Lei A
2017-09-01  9:45 ` Maxime Coquelin
2017-09-01 10:33   ` Tiwei Bie
2017-09-07 17:47 ` Maxime Coquelin
2017-09-08  0:48   ` Tiwei Bie [this message]
2017-09-08  7:41 ` Yuanhan Liu
2017-09-08 10:38   ` Tiwei Bie
2017-09-08 12:50 ` [dpdk-dev] [PATCH v2] " Tiwei Bie
2017-09-09 14:58   ` santosh
2017-09-11  1:27     ` Tiwei Bie
2017-09-11 12:06     ` Yuanhan Liu
