From: Bruce Richardson <bruce.richardson@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: dev@dpdk.org, tiwei.bie@intel.com, david.marchand@redhat.com,
jfreimann@redhat.com, zhihong.wang@intel.com,
konstantin.ananyev@intel.com, mattias.ronnblom@ericsson.com
Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: I-cache pressure optimizations
Date: Wed, 5 Jun 2019 13:52:37 +0100 [thread overview]
Message-ID: <20190605125237.GE1550@bricha3-MOBL.ger.corp.intel.com> (raw)
In-Reply-To: <d4dc9b8f-5b10-0030-f10d-5af0b5637e35@redhat.com>
On Wed, Jun 05, 2019 at 02:32:27PM +0200, Maxime Coquelin wrote:
>
>
> On 5/29/19 3:04 PM, Maxime Coquelin wrote:
> > Some OVS-DPDK PVP benchmarks show a performance drop
> > when switching from DPDK v17.11 to v18.11.
> >
> > With the addition of packed ring layout support,
> > rte_vhost_enqueue_burst and rte_vhost_dequeue_burst
> > became very large, while only part of their
> > instructions is executed on a given datapath (either
> > the packed or the split ring layout is in use, never
> > both).
> >
> > This series aims at reducing I-cache pressure,
> > first by un-inlining the split and packed ring code,
> > but also by moving parts considered cold into
> > dedicated functions (dirty page logging, and the
> > fragmented descriptor buffer handling added for
> > CVE-2018-1059).
> >
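The cold-path split described above can be sketched as below. This is a minimal, hypothetical example (the names and logic are illustrative, not the actual vhost code): the rare case is moved into a separate function marked noinline/cold, so its instructions stay out of the hot path's I-cache footprint.

```c
#include <stddef.h>
#include <stdint.h>

/* Cold path: handles the unlikely case (e.g. a fragmented buffer).
 * noinline + cold keeps its body out of the caller and lets the
 * compiler place it in a cold text section. */
static __attribute__((noinline, cold)) uint32_t
sum_fragmented(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	for (size_t i = 0; i < len; i++)
		sum += buf[i];
	return sum;
}

/* Hot path: stays small, so more of it fits in the I-cache/iTLB. */
static inline uint32_t
sum_burst(const uint8_t *buf, size_t len, int fragmented)
{
	if (__builtin_expect(fragmented, 0))
		return sum_fragmented(buf, len); /* taken rarely */

	uint32_t sum = 0;
	for (size_t i = 0; i < len; i++)
		sum += buf[i];
	return sum;
}
```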
> > With the series applied, size of the enqueue and
> > dequeue split paths is reduced significantly:
> >
> > +---------+--------------------+---------------------+
> > | Version | Enqueue split path | Dequeue split path |
> > +---------+--------------------+---------------------+
> > | v19.05 | 16461B | 25521B |
> > | +series | 7286B | 11285B |
> > +---------+--------------------+---------------------+
> >
> > Using the perf tool to monitor the iTLB-load-misses
> > event while running the PVP benchmark with testpmd as
> > the vswitch, we can see the number of iTLB misses drop:
> >
> > - v19.05:
> > # perf stat --repeat 10 -C 2,3 -e iTLB-load-misses -- sleep 10
> >
> > Performance counter stats for 'CPU(s) 2,3' (10 runs):
> >
> > 2,438 iTLB-load-misses ( +- 13.43% )
> >
> > 10.00058928 +- 0.00000336 seconds time elapsed ( +- 0.00% )
> >
> > - +series:
> > # perf stat --repeat 10 -C 2,3 -e iTLB-load-misses -- sleep 10
> >
> > Performance counter stats for 'CPU(s) 2,3' (10 runs):
> >
> > 55 iTLB-load-misses ( +- 10.08% )
> >
> > 10.00059466 +- 0.00000283 seconds time elapsed ( +- 0.00% )
> >
> > The series also forces the inlining of some rte_memcpy
> > helpers: with the addition of packed ring support, some
> > of them were no longer inlined but instead emitted as
> > standalone functions in the virtio_net object file,
> > which was not expected.
> >
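For what it's worth, forcing inlining with GCC looks roughly like the following sketch (the macro and helper names are illustrative, analogous to but not DPDK's actual __rte_always_inline): a plain `inline` is only a hint, so the compiler may still emit an out-of-line copy of a helper; adding always_inline guarantees it is folded into each caller.

```c
#include <stdint.h>

/* A plain `inline` is a hint; always_inline makes it mandatory,
 * avoiding an unexpected out-of-line copy in the object file. */
#define force_inline __attribute__((always_inline)) inline

/* Small copy helper that must always be inlined into its callers. */
static force_inline void
copy16(uint8_t *dst, const uint8_t *src)
{
	for (int i = 0; i < 16; i++)
		dst[i] = src[i];
}
```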
> > Finally, the series simplifies descriptor buffer
> > prefetching by doing it in the recently introduced
> > descriptor buffer mapping function.
> >
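The idea of prefetching at mapping time can be sketched as follows (hypothetical structures and names, not the actual vhost code): issuing the prefetch inside the mapping helper means every caller benefits, without explicit prefetch calls scattered through the datapath.

```c
#include <stddef.h>

/* Illustrative stand-in for a descriptor buffer vector entry. */
struct buf_vec {
	void *addr;
	size_t len;
};

/* Record the mapping and prefetch the buffer for reading; the data
 * is then likely cache-resident by the time the copy loop touches it. */
static inline void
map_one_desc(struct buf_vec *vec, void *addr, size_t len)
{
	vec->addr = addr;
	vec->len = len;
	__builtin_prefetch(addr, 0 /* read */, 3 /* high temporal locality */);
}
```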
> > v3:
> > ===
> > - Prefix alloc_copy_ind_table with vhost_ (Mattias)
> > - Remove double new line (Tiwei)
> > - Fix grammar error in patch 3's commit message (Jens)
> > - Force noinline for header copy functions (Mattias)
> > - Fix dst assignment in copy_hdr_from_desc (Tiwei)
> >
> > v2:
> > ===
> > - Fix checkpatch issue
> > - Reset author for patch 5 (David)
> > - Force non-inlining in patch 2 (David)
> > - Fix typo in patch 3 commit message (David)
> >
> > Maxime Coquelin (5):
> > vhost: un-inline dirty pages logging functions
> > vhost: do not inline packed and split functions
> > vhost: do not inline unlikely fragmented buffers code
> > vhost: simplify descriptor's buffer prefetching
> > eal/x86: force inlining of all memcpy and mov helpers
> >
> > .../common/include/arch/x86/rte_memcpy.h | 18 +-
> > lib/librte_vhost/vdpa.c | 2 +-
> > lib/librte_vhost/vhost.c | 164 +++++++++++++++++
> > lib/librte_vhost/vhost.h | 165 ++----------------
> > lib/librte_vhost/virtio_net.c | 140 +++++++--------
> > 5 files changed, 251 insertions(+), 238 deletions(-)
> >
>
>
> Applied patches 1 to 4 to dpdk-next-virtio/master.
>
> Bruce, I'm assigning patch 5 to you in Patchwork, as this is not
> vhost/virtio specific.
>
Patch looks ok to me, but I'm not the one to apply it.
/Bruce
Thread overview: 11+ messages
2019-05-29 13:04 Maxime Coquelin
2019-05-29 13:04 ` [dpdk-dev] [PATCH v3 1/5] vhost: un-inline dirty pages logging functions Maxime Coquelin
2019-05-29 13:04 ` [dpdk-dev] [PATCH v3 2/5] vhost: do not inline packed and split functions Maxime Coquelin
2019-05-29 13:04 ` [dpdk-dev] [PATCH v3 3/5] vhost: do not inline unlikely fragmented buffers code Maxime Coquelin
2019-05-29 13:04 ` [dpdk-dev] [PATCH v3 4/5] vhost: simplify descriptor's buffer prefetching Maxime Coquelin
2019-05-29 13:04 ` [dpdk-dev] [PATCH v3 5/5] eal/x86: force inlining of all memcpy and mov helpers Maxime Coquelin
2019-06-05 12:53 ` Bruce Richardson
2019-06-06 9:33 ` Maxime Coquelin
2019-06-05 12:32 ` [dpdk-dev] [PATCH v3 0/5] vhost: I-cache pressure optimizations Maxime Coquelin
2019-06-05 12:52 ` Bruce Richardson [this message]
2019-06-05 13:00 ` Maxime Coquelin