From: Yuanhan Liu
To: Maxime Coquelin
Cc: dev@dpdk.org
Date: Tue, 23 Aug 2016 22:42:11 +0800
Subject: Re: [dpdk-dev] [PATCH 0/6] vhost: add Tx zero copy support
Message-ID: <20160823144211.GP30752@yliu-dev.sh.intel.com>
In-Reply-To: <7eeb0acd-e98d-1dcd-793f-0b5f4b74874c@redhat.com>
References: <1471939839-29778-1-git-send-email-yuanhan.liu@linux.intel.com> <7eeb0acd-e98d-1dcd-793f-0b5f4b74874c@redhat.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
List-Id: patches and discussions about DPDK

On Tue, Aug 23, 2016 at 04:18:40PM +0200, Maxime Coquelin wrote:
> 
> 
> On 08/23/2016 10:10 AM, Yuanhan Liu wrote:
> >This patch set enables vhost Tx zero copy. The majority of the work
> >goes to patch 4: vhost: add Tx zero copy.
> >
> >The basic idea of Tx zero copy is that, instead of copying data from
> >the desc buf, we let the mbuf reference the desc buf addr directly.
> >
> >The major issue behind that is how and when to update the used ring.
> >You can check the commit log of patch 4 for more details.
> >
> >Patch 5 introduces a new flag, RTE_VHOST_USER_TX_ZERO_COPY, to enable
> >Tx zero copy, which is disabled by default.
> >
> >A few more TODOs are left, including handling a desc buf that spans
> >two physical pages, updating the release note, etc. Those will be
> >fixed in a later version. For now, here is a simple one that hopefully
> >shows the idea clearly.
> >
> >I did some quick tests; the performance gain is quite impressive.
> >
> >For a simple dequeue workload (running rxonly in vhost-pmd and running
> >txonly in guest testpmd), it yields a 40+% performance boost for 1400B
> >packet size.
> >
> >For the VM2VM iperf test case, it's even better: about a 70% boost.
> 
> This is indeed impressive.
> Somewhere else, you mention that there is a small regression with small
> packets. Do you have some figures to share?

It could be a 15% drop for the PVP case with 64B packet size. The test
topo is:

    nic 0 --> VM Rx --> VM Tx --> nic 0

Put simply, I run the vhost-switch example in the host and run testpmd
in the guest.

Though the number looks big, I don't think it's an issue. First of all,
the feature is disabled by default. Secondly, if you want to enable it,
you should be certain that the packet size is normally big; otherwise,
you should not bother trying zero copy.

> Also, with this feature OFF, do you see some regressions for both small
> and bigger packets?

Good question. I didn't check it on purpose, but I did try it with the
feature disabled, and the number I got is pretty much the same as the
one I got without this feature. So I would say I don't see regressions.
Anyway, I can do more tests to make sure.

	--yliu