DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Tiwei Bie <tiwei.bie@intel.com>, dev@dpdk.org
Cc: yliu@fridaylinux.org, Zhihong Wang <zhihong.wang@intel.com>,
	Zhiyong Yang <zhiyong.yang@intel.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: adaptively batch small guest memory copies
Date: Thu, 7 Sep 2017 19:47:57 +0200	[thread overview]
Message-ID: <a236798f-c4fe-8651-3471-b766c127f346@redhat.com> (raw)
In-Reply-To: <20170824021939.21306-1-tiwei.bie@intel.com>

Hi Tiwei,

On 08/24/2017 04:19 AM, Tiwei Bie wrote:
> This patch adaptively batches the small guest memory copies.
> By batching the small copies, the efficiency of executing the
> memory LOAD instructions can be improved greatly, because the
> memory LOAD latency can be effectively hidden by the pipeline.
> We saw great performance boosts in small-packet PVP tests.
> 
> This patch improves performance for small packets and also
> distinguishes packets by size. So although performance for big
> packets doesn't change, this makes it relatively easy to apply
> special optimizations to big packets too.
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
> Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
> ---
> This optimization depends on the CPU's internal pipeline design,
> so further tests (e.g. on ARM) from the community are appreciated.
> 
>   lib/librte_vhost/vhost.c      |   2 +-
>   lib/librte_vhost/vhost.h      |  13 +++
>   lib/librte_vhost/vhost_user.c |  12 +++
>   lib/librte_vhost/virtio_net.c | 240 ++++++++++++++++++++++++++++++++----------
>   4 files changed, 209 insertions(+), 58 deletions(-)
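
The batching idea described in the commit message can be sketched roughly as
follows. This is an illustrative sketch only, not the patch's actual code: the
names (batch_copy_elem, vhost_enqueue_copy, vhost_flush_batch) and thresholds
are hypothetical, and plain memcpy() stands in for rte_memcpy().

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define BATCH_MAX 64        /* illustrative batch capacity */
#define SMALL_COPY_MAX 256  /* copies at or above this bypass the batch */

struct batch_copy_elem {
	void *dst;
	const void *src;
	size_t len;
};

struct copy_batch {
	struct batch_copy_elem elem[BATCH_MAX];
	int nr;
};

/* Queue a small copy instead of performing it immediately.
 * Large copies (or a full batch) fall back to a direct copy. */
static void
vhost_enqueue_copy(struct copy_batch *b, void *dst,
		   const void *src, size_t len)
{
	if (len >= SMALL_COPY_MAX || b->nr == BATCH_MAX) {
		memcpy(dst, src, len);
		return;
	}
	b->elem[b->nr].dst = dst;
	b->elem[b->nr].src = src;
	b->elem[b->nr].len = len;
	b->nr++;
}

/* Flush all queued copies back to back. Because the loop
 * iterations are independent, the memory LOADs from different
 * source buffers can overlap in the CPU pipeline, which is where
 * the speedup for small packets comes from. */
static void
vhost_flush_batch(struct copy_batch *b)
{
	for (int i = 0; i < b->nr; i++)
		memcpy(b->elem[i].dst, b->elem[i].src, b->elem[i].len);
	b->nr = 0;
}
```

In the real patch the batch would be flushed at natural boundaries (e.g. once
per burst, before updating the used ring), so guests never observe a deferred
copy.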

I did some PVP benchmarking with your patch.
First I tried my standard PVP setup, with io forwarding on the host and
macswap on the guest in bidirectional mode.

With this setup I noticed no improvement (18.8Mpps), but I think this is
explained by the guest being the bottleneck here.
So I changed my setup to do csum forwarding on the host side, so that the
host's PMD threads are more loaded.

In this case, I noticed a great improvement: I get 18.8Mpps with your
patch instead of 14.8Mpps without! Great work!
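
For reference, a setup like the one above might be driven roughly as follows
with testpmd. This is a hedged illustration, not the exact commands used: the
core lists, socket path, and vdev name are placeholders, and the right options
depend on your NIC and topology.

```shell
# Host side: forward between the NIC and a vhost-user port.
# io forwarding just moves packets; csum forwarding also recomputes
# checksums, loading the host PMD threads more heavily.
dpdk-testpmd -l 0-2 -n 4 \
    --vdev 'net_vhost0,iface=/tmp/vhost-user.sock' \
    -- -i --forward-mode=csum

# Guest side: macswap forwarding swaps the source and destination
# MAC addresses of every packet before sending it back.
dpdk-testpmd -l 0-1 -n 4 -- -i --forward-mode=macswap
```

The forwarding mode can also be changed at the interactive prompt with
"set fwd io", "set fwd csum", or "set fwd macswap" followed by "start".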

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


Thread overview: 13+ messages
2017-08-24  2:19 Tiwei Bie
2017-08-28  6:31 ` Jens Freimann
2017-08-28 11:41   ` Yao, Lei A
2017-09-01  9:45 ` Maxime Coquelin
2017-09-01 10:33   ` Tiwei Bie
2017-09-07 17:47 ` Maxime Coquelin [this message]
2017-09-08  0:48   ` Tiwei Bie
2017-09-08  7:41 ` Yuanhan Liu
2017-09-08 10:38   ` Tiwei Bie
2017-09-08 12:50 ` [dpdk-dev] [PATCH v2] " Tiwei Bie
2017-09-09 14:58   ` santosh
2017-09-11  1:27     ` Tiwei Bie
2017-09-11 12:06     ` Yuanhan Liu
