From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Wang, Zhihong"
To: Yuanhan Liu, Jianbo Liu
Cc: Maxime Coquelin, "dev@dpdk.org"
Date: Tue, 27 Sep 2016 16:45:24 +0000
Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
Message-ID: <8F6C2BD409508844A0EFC19955BE09414E7B7C0B@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <20160927102123.GL25823@yliu-dev.sh.intel.com>
References: <1471319402-112998-1-git-send-email-zhihong.wang@intel.com>
 <1471585430-125925-1-git-send-email-zhihong.wang@intel.com>
 <8F6C2BD409508844A0EFC19955BE09414E7B5581@SHSMSX103.ccr.corp.intel.com>
 <20160922022903.GJ23158@yliu-dev.sh.intel.com>
 <8F6C2BD409508844A0EFC19955BE09414E7B5DAE@SHSMSX103.ccr.corp.intel.com>
 <20160927102123.GL25823@yliu-dev.sh.intel.com>

> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Tuesday, September 27, 2016 6:21 PM
> To: Jianbo Liu
> Cc: Wang, Zhihong; Maxime Coquelin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
>
> On Thu, Sep 22, 2016 at 05:01:41PM +0800, Jianbo Liu wrote:
> > On 22 September 2016 at 14:58, Wang, Zhihong wrote:
> > >
> > >> -----Original Message-----
> > >> From: Jianbo Liu [mailto:jianbo.liu@linaro.org]
> > >> Sent: Thursday, September 22, 2016 1:48 PM
> > >> To: Yuanhan Liu
> > >> Cc: Wang, Zhihong; Maxime Coquelin; dev@dpdk.org
> > >> Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
> > >>
> > >> On 22 September 2016 at 10:29, Yuanhan Liu wrote:
> > >> > On Wed, Sep 21, 2016 at 08:54:11PM +0800, Jianbo Liu wrote:
> > >> >> >> > My setup consists of one host running a guest.
> > >> >> >> > The guest generates as many 64-byte packets as possible using
> > >> >> >>
> > >> >> >> Have you tested with other packet sizes?
> > >> >> >> My testing shows that performance drops when the packet size is
> > >> >> >> more than 256.
> > >> >> >
> > >> >> > Hi Jianbo,
> > >> >> >
> > >> >> > Thanks for reporting this.
> > >> >> >
> > >> >> > 1. Are you running the vector frontend with mrg_rxbuf=off?
> > >> >> >
> > >> Yes, my testing is with mrg_rxbuf=off, but not the vector frontend PMD.
> > >>
> > >> >> > 2. Could you please specify what CPU you're running? Is it Haswell
> > >> >> >    or Ivy Bridge?
> > >> >> >
> > >> It's an ARM server.
> > >>
> > >> >> > 3. What percentage of drop are you seeing?
> > >>
> > >> The testing results:
> > >>   size (bytes)    improvement (%)
> > >>   64                3.92
> > >>   128              11.51
> > >>   256              24.16
> > >>   512             -13.79
> > >>   1024            -22.51
> > >>   1500            -12.22
> > >> A correction: performance drops once the packet size is larger
> > >> than 512.
> > >
> > > Jianbo,
> > >
> > > Could you please verify whether this patch really causes the enqueue
> > > perf to drop?
> > >
> > > You can test the enqueue path alone by setting the guest to do rxonly,
> > > and compare the mpps reported by "show port stats all" in the guest.
> > >
> >
> > Tested with testpmd, host: txonly, guest: rxonly
> >   size (bytes)    improvement (%)
> >   64                4.12
> >   128               6
> >   256               2.65
> >   512              -1.12
> >   1024             -7.02
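(A note for anyone reproducing the enqueue-only numbers above: the testpmd
flow is roughly the session sketched below. EAL options and port setup are
omitted, and it assumes testpmd runs in interactive mode in both host and
guest; it is an illustration, not the exact command lines used here.)

    host testpmd>  set fwd txonly
    host testpmd>  start

    guest testpmd> set fwd rxonly
    guest testpmd> start
    guest testpmd> show port stats all

The Rx-pps counter reported in the guest then reflects the enqueue path
alone.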
>
> There is a difference between Zhihong's code and the old code that I
> spotted the first time: Zhihong removed the avail_idx prefetch. I
> understand the prefetch becomes a bit tricky when the mrg-rx code path
> is considered; thus, I didn't comment on that.
>
> That's one of the differences that, IMO, could cause a regression. I
> then finally got a chance to add it back.
>
> A rough test shows it greatly improves the performance for the 1400B
> packet size in the "txonly in host and rxonly in guest" case: +33% is
> the number I get with my test server (Ivybridge).

Thanks Yuanhan! I'll validate this on x86.

> I guess this might/would help your case as well. Mind running a test
> and telling me the results?
>
> BTW, I made it in a rush; I haven't tested the mrg-rx code path yet.
>
> Thanks.
>
>     --yliu
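PS, for readers of the archive: the avail_idx prefetch discussed above is
along the lines of the sketch below. This only illustrates the idea, it is
not Yuanhan's actual patch; the struct mirrors the avail ring layout from
the virtio spec, and the helper name is made up.

    #include <stdint.h>
    #include <rte_prefetch.h>

    /* Avail ring layout, as defined by the virtio spec. */
    struct vring_avail_sketch {
            uint16_t flags;
            uint16_t idx;
            uint16_t ring[];
    };

    /*
     * Prefetch the avail ring entry that will be consumed next, so
     * the cache miss on the ring overlaps with copying the current
     * packet instead of stalling the next loop iteration.
     */
    static inline void
    prefetch_next_avail(const struct vring_avail_sketch *avail,
                        uint16_t next_idx, uint16_t qsize)
    {
            /* The ring size is a power of two; mask to wrap. */
            rte_prefetch0(&avail->ring[next_idx & (qsize - 1)]);
    }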