From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <zhihong.wang@intel.com>
Received: from mga03.intel.com (mga03.intel.com [134.134.136.65])
 by dpdk.org (Postfix) with ESMTP id B42F64B79
 for <dev@dpdk.org>; Sun, 25 Sep 2016 07:42:00 +0200 (CEST)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga103.jf.intel.com with ESMTP; 24 Sep 2016 22:41:59 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.30,391,1470726000"; d="scan'208";a="1056282539"
Received: from fmsmsx103.amr.corp.intel.com ([10.18.124.201])
 by orsmga002.jf.intel.com with ESMTP; 24 Sep 2016 22:41:58 -0700
Received: from fmsmsx154.amr.corp.intel.com (10.18.116.70) by
 FMSMSX103.amr.corp.intel.com (10.18.124.201) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Sat, 24 Sep 2016 22:41:58 -0700
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
 FMSMSX154.amr.corp.intel.com (10.18.116.70) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Sat, 24 Sep 2016 22:41:57 -0700
Received: from shsmsx103.ccr.corp.intel.com ([169.254.4.234]) by
 SHSMSX152.ccr.corp.intel.com ([169.254.6.95]) with mapi id 14.03.0248.002;
 Sun, 25 Sep 2016 13:41:56 +0800
From: "Wang, Zhihong" <zhihong.wang@intel.com>
To: Thomas Monjalon <thomas.monjalon@6wind.com>, Jianbo Liu
 <jianbo.liu@linaro.org>
CC: "dev@dpdk.org" <dev@dpdk.org>, Yuanhan Liu <yuanhan.liu@linux.intel.com>, 
 Maxime Coquelin <maxime.coquelin@redhat.com>
Thread-Topic: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
Thread-Index: AQHR+hh5PoNiMS5qakKA1d8du6AZMqBUHzGAgC8wyICAAI0Q0P//tyiAgADjrICAADeEgIAAl3pg//+etYCAAIqJAP//1HGAACoR6+D///6vAIAAMjEA//zh+uA=
Date: Sun, 25 Sep 2016 05:41:55 +0000
Message-ID: <8F6C2BD409508844A0EFC19955BE09414E7B6EA6@SHSMSX103.ccr.corp.intel.com>
References: <1471319402-112998-1-git-send-email-zhihong.wang@intel.com>
 <8F6C2BD409508844A0EFC19955BE09414E7B6204@SHSMSX103.ccr.corp.intel.com>
 <CAP4Qi3_DxAnvs0jX1P=G_PiLnRRbP5Wty-eU-OPE_81RGCAuTA@mail.gmail.com>
 <1536480.IYe8r5XoNN@xps13>
In-Reply-To: <1536480.IYe8r5XoNN@xps13>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ctpclassification: CTP_IC
x-titus-metadata-40: eyJDYXRlZ29yeUxhYmVscyI6IiIsIk1ldGFkYXRhIjp7Im5zIjoiaHR0cDpcL1wvd3d3LnRpdHVzLmNvbVwvbnNcL0ludGVsMyIsImlkIjoiOGEzNjkxOTUtNzQ0OS00NTBiLTliOWEtYjliMzk1MWQ4YjcxIiwicHJvcHMiOlt7Im4iOiJDVFBDbGFzc2lmaWNhdGlvbiIsInZhbHMiOlt7InZhbHVlIjoiQ1RQX0lDIn1dfV19LCJTdWJqZWN0TGFiZWxzIjpbXSwiVE1DVmVyc2lvbiI6IjE1LjkuNi42IiwiVHJ1c3RlZExhYmVsSGFzaCI6ImltTkt5UWdMUlg2VHdpT2FUemlLbk45MTFCSm52TmhiQjhUVU5iMHRQUjg9In0=
x-originating-ip: [10.239.127.40]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Sun, 25 Sep 2016 05:42:01 -0000



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Friday, September 23, 2016 9:41 PM
> To: Jianbo Liu <jianbo.liu@linaro.org>
> Cc: dev@dpdk.org; Wang, Zhihong <zhihong.wang@intel.com>; Yuanhan Liu
> <yuanhan.liu@linux.intel.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>
> Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
>
> 2016-09-23 18:41, Jianbo Liu:
> > On 23 September 2016 at 10:56, Wang, Zhihong <zhihong.wang@intel.com>
> wrote:
> > .....
> > > This is expected because the 2nd patch is just a baseline and all
> > > optimization patches are organized in the rest of this patch set.
> > >
> > > I think you can do bottleneck analysis on ARM to see what's slowing
> > > down the perf; there might be some micro-arch complications there,
> > > most likely in memcpy.
> > >
> > > Do you use glibc's memcpy? I suggest hand-crafting your own.
> > >
> > > Could you also publish the mrg_rxbuf=on data? It's the more widely
> > > used configuration in terms of spec conformance.
> > >
> > I don't think it will be helpful for you, considering the differences
> > between x86 and ARM.


Hi Jianbo,

This patch set does help on ARM for small packets such as 64B ones,
which actually demonstrates that x86 and ARM behave similarly with
respect to the caching optimization in this patch set.

My estimate is based on the following:

 1. The last patch is for mrg_rxbuf=on, and since you said it helps
    perf, we can ignore it for now while we discuss mrg_rxbuf=off

 2. Vhost enqueue perf =
    Ring overhead + Virtio header overhead + Data memcpy overhead
    (illustrative numbers follow after this list)

 3. This patch helps small-packet traffic, which means it helps the
    ring + virtio header operations

 4. So when you see a perf drop for packets larger than 512B, it is
    most likely caused by memcpy on ARM not working well with this
    patch
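
To make point 2 concrete, here is a back-of-the-envelope split (the
numbers are purely illustrative, not measured): if ring + virtio
header operations cost ~50 cycles/packet and memcpy costs ~0.5
cycles/byte, a 64B packet spends ~60% of its cycles in ring + header
work, while a 1024B packet spends ~90% in memcpy. So ring/header
optimizations show up at 64B, and memcpy quality decides the outcome
at larger sizes.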

I'm not saying glibc's memcpy is not good enough; it's just that
this is a rather special use case. And since specialized memcpy +
this patch gives significantly better performance than other
combinations on x86, we suggest hand-crafting a specialized memcpy
for it, roughly along the lines of the sketch below.
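
For example, on ARM64 a hand-crafted copy could use NEON 128-bit
loads/stores for the bulk of the data and fall back to libc for the
tail. This is only a rough sketch of the idea, untested, and
vhost_memcpy is a hypothetical name:

    #include <arm_neon.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical specialized copy for vhost enqueue: bulk-copy with
     * two 128-bit NEON load/store pairs per iteration, and leave the
     * tail to libc memcpy. Whether this beats plain memcpy depends on
     * the micro-arch and has to be measured. */
    static inline void
    vhost_memcpy(void *dst, const void *src, size_t n)
    {
            uint8_t *d = (uint8_t *)dst;
            const uint8_t *s = (const uint8_t *)src;

            while (n >= 32) {
                    vst1q_u8(d, vld1q_u8(s));
                    vst1q_u8(d + 16, vld1q_u8(s + 16));
                    d += 32;
                    s += 32;
                    n -= 32;
            }
            memcpy(d, s, n); /* small tails are cheap either way */
    }

The x86 rte_memcpy does the same kind of thing with SSE/AVX; the
point is to tune the inner loop for the copy sizes vhost actually
sees.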

Of course on ARM this is still just my speculation, and we need to
either prove it or find the actual root cause.

It would be **REALLY HELPFUL** if you could help test this patch set
on ARM for the mrg_rxbuf=on cases to see if it is in fact helpful to
ARM at all, since mrg_rxbuf=on is the more widely used configuration.
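
For reference, mergeable RX buffers are controlled by the mrg_rxbuf
property of the virtio-net device in QEMU, so a vhost-user guest
could be started roughly like this (the paths and ids below are
placeholders):

    qemu-system-aarch64 ... \
            -chardev socket,id=char0,path=/tmp/vhost-user0 \
            -netdev type=vhost-user,id=net0,chardev=char0 \
            -device virtio-net-pci,netdev=net0,mrg_rxbuf=on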


Thanks
Zhihong


> > So please move on with this patchset...
>
> Jianbo,
> I don't understand.
> You said that the 2nd patch is a regression:
> -       volatile uint16_t       last_used_idx;
> +       uint16_t                last_used_idx;
>
> And the overall series leads to a performance regression
> for packets > 512 B, right?
> But we don't know whether you have tested the v6 or not.
>
> Zhihong talked about some possible improvements in rte_memcpy.
> ARM64 is using libc memcpy in rte_memcpy.
>
> Now you seem to give up.
> Does it mean you accept having a regression in 16.11 release?
> Are you working on rte_memcpy?