From: "Wang, Zhihong" <zhihong.wang@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: "mst@redhat.com" <mst@redhat.com>, "dev@dpdk.org" <dev@dpdk.org>,
"vkaplans@redhat.com" <vkaplans@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support to the TX path
Date: Mon, 31 Oct 2016 10:01:18 +0000
Message-ID: <8F6C2BD409508844A0EFC19955BE09414E7DA533@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <88169067-290d-a7bb-ab2c-c9b8ec1b1ded@redhat.com>
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Friday, October 28, 2016 3:42 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>; Yuanhan Liu
> <yuanhan.liu@linux.intel.com>
> Cc: stephen@networkplumber.org; Pierre Pfister (ppfister)
> <ppfister@cisco.com>; Xie, Huawei <huawei.xie@intel.com>; dev@dpdk.org;
> vkaplans@redhat.com; mst@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support
> to the TX path
>
>
>
> On 10/28/2016 02:49 AM, Wang, Zhihong wrote:
> >
> >> > -----Original Message-----
> >> > From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> >> > Sent: Thursday, October 27, 2016 6:46 PM
> >> > To: Maxime Coquelin <maxime.coquelin@redhat.com>
> >> > Cc: Wang, Zhihong <zhihong.wang@intel.com>;
> >> > stephen@networkplumber.org; Pierre Pfister (ppfister)
> >> > <ppfister@cisco.com>; Xie, Huawei <huawei.xie@intel.com>; dev@dpdk.org;
> >> > vkaplans@redhat.com; mst@redhat.com
> >> > Subject: Re: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support
> >> > to the TX path
> >> >
> >> > On Thu, Oct 27, 2016 at 12:35:11PM +0200, Maxime Coquelin wrote:
> >>> > >
> >>> > >
> >>> > > On 10/27/2016 12:33 PM, Yuanhan Liu wrote:
> >>>> > > >On Thu, Oct 27, 2016 at 11:10:34AM +0200, Maxime Coquelin wrote:
> >>>>> > > >>Hi Zhihong,
> >>>>> > > >>
> >>>>> > > >>On 10/27/2016 11:00 AM, Wang, Zhihong wrote:
> >>>>>> > > >>>Hi Maxime,
> >>>>>> > > >>>
> >>>>>> > > >>>Seems the indirect desc feature is causing a serious performance
> >>>>>> > > >>>degradation on the Haswell platform, about a 20% drop for both
> >>>>>> > > >>>mrg=on and mrg=off (--txqflags=0xf00, non-vector version),
> >>>>>> > > >>>for both iofwd and macfwd.
> >>>>> > > >>I tested PVP (with macswap on guest) and Txonly/Rxonly on an Ivy
> >>>>> > > >>Bridge platform, and didn't face such a drop.
> >>>> > > >
> >>>> > > >I was actually wondering whether that may be the cause. I tested it
> >>>> > > >with my Ivy Bridge server as well, and I saw no drop.
> >>>> > > >
> >>>> > > >Maybe you should find a similar platform (Haswell) and have a try?
> >>> > > Yes, that's why I asked Zhihong whether he could test Txonly in the
> >>> > > guest to see if the issue is reproducible like this.
> >> >
> >> > I have no Haswell box, otherwise I could do a quick test for you. IIRC,
> >> > he tried to disable the indirect_desc feature, then the performance
> >> > recovered. So, it's likely the indirect_desc is the culprit here.
> >> >
> >>> > > It will be easier for me to find a Haswell machine if it does not have
> >>> > > to be connected back to back to a HW/SW packet generator.
> > In fact a simple loopback test will also do, without a pktgen.
> >
> > Start testpmd in both the host and the guest, and do "start" in one
> > and "start tx_first 32" in the other.
> >
> > Perf drop is about 24% in my test.
> >
>
> Thanks, I never tried this test.
> I managed to find a Haswell platform (Intel(R) Xeon(R) CPU E5-2699 v3
> @ 2.30GHz), and can reproduce the problem with the loopback test you
> mention. I see a performance drop of about 10% (8.94 Mpps vs 8.08 Mpps).
> Out of curiosity, what are the numbers you get with your setup?
Hi Maxime,
Let's align our test case to RC2: mrg=on, loopback, on Haswell.
My results are below:
1. indirect=1: 5.26 Mpps
2. indirect=0: 6.54 Mpps
It's about a 24% drop.
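
To be concrete, a loopback setup along the lines below should reproduce
it. The core masks, socket path and QEMU options are only illustrative
placeholders (the QEMU line is heavily abbreviated, hugepage and
shared-memory setup omitted), so please adapt them to your environment:

  # Guest VM: indirect_desc=on/off on the virtio-net device is what
  # toggles the feature for the indirect=1 / indirect=0 runs.
  qemu-system-x86_64 ... \
      -chardev socket,id=char0,path=/tmp/vhost-user.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0,mrg_rxbuf=on,indirect_desc=on

  # Host: testpmd on the vhost PMD, non-vector path
  # (the vdev name may differ between DPDK versions).
  ./testpmd -c 0x3 -n 4 --socket-mem 1024 \
      --vdev 'eth_vhost0,iface=/tmp/vhost-user.sock,queues=1' \
      -- -i --txqflags=0xf00
  testpmd> start

  # Guest: testpmd on the virtio device, then kick off the traffic.
  ./testpmd -c 0x3 -n 4 -- -i --txqflags=0xf00
  testpmd> start tx_first 32

Switching indirect_desc to off in the -device line gives the second
result above.
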
>
> As I had never tried this test, I ran it on my Sandy Bridge setup as well,
> and I also see a performance regression, this time of 4%.
>
> If I understand the test correctly, only 32 packets are allocated,
> corresponding to a single burst, which is less than the queue size.
> So it makes sense that the performance is lower with this test case.
Actually it's 32 bursts of 32 packets each, so 1024 packets in total,
enough to fill the queue.
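
In case it helps when comparing numbers, the per-port RX/TX counters can
be checked at the testpmd prompt, and "stop" prints the accumulated
forwarding statistics:

  testpmd> show port stats all
  testpmd> stop
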
Thanks
Zhihong
>
> Thanks,
> Maxime