From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin
To: Yuanhan Liu
Cc: "Michael S. Tsirkin", Stephen Hemminger, dev@dpdk.org, qemu-devel@nongnu.org
Date: Mon, 10 Oct 2016 16:54:39 +0200
Message-ID: <18372cc2-19d3-f455-728d-2f2ed405d800@redhat.com>
In-Reply-To: <20161010144209.GI1597@yliu-dev.sh.intel.com>
Subject: Re: [dpdk-dev] [Qemu-devel] [PATCH 1/2] vhost: enable any layout feature

On 10/10/2016 04:42 PM, Yuanhan Liu wrote:
> On Mon, Oct 10, 2016 at 02:40:44PM +0200, Maxime Coquelin wrote:
>>>>> At that time, a packet always use 2 descs. Since indirect desc is
>>>>> enabled (by default) now, the assumption is not true then. What's
>>>>> worse, it might even slow things a bit down. That should also be
>>>>> part of the reason why performance is slightly worse than before.
>>>>>
>>>>> 	--yliu
>>>>
>>>> I'm not sure I get what you are saying
>>>>
>>>>> commit 1d41d77cf81c448c1b09e1e859bfd300e2054a98
>>>>> Author: Yuanhan Liu
>>>>> Date:   Mon May 2 17:46:17 2016 -0700
>>>>>
>>>>>     vhost: optimize dequeue for small packets
>>>>>
>>>>>     A virtio driver normally uses at least 2 desc buffers for Tx: the
>>>>>     first for storing the header, and the others for storing the data.
>>>>>
>>>>>     Therefore, we could fetch the first data desc buf before the main
>>>>>     loop, and do the copy first before the check of "are we done yet?".
>>>>>     This could save one check for small packets that just have one
>>>>>     data desc buffer and need one mbuf to store it.
>>>>>
>>>>>     Signed-off-by: Yuanhan Liu
>>>>>     Acked-by: Huawei Xie
>>>>>     Tested-by: Rich Lane
>>>>
>>>> This fast-paths the 2-descriptors format but it's not active
>>>> for indirect descriptors. Is this what you mean?
>>>
>>> Yes. It's also not active when ANY_LAYOUT is actually turned on.
>>>
>>>> Should be a simple matter to apply this optimization for indirect.
>>>
>>> Might be.
>>
>> If I understand the code correctly, indirect descs also benefit from this
>> optimization, or am I missing something?
>
> Aha..., you are right!

The interesting thing is that the patch I sent on Thursday, which removes
the header access when no offload has been negotiated[0], seems to reduce
almost to zero the performance penalty seen with indirect descriptors
enabled. I see this with 64-byte packets using testpmd on both ends.

When I did the patch, I would have expected the same gain in both modes,
whereas I measured +1% for direct and +4% for indirect.

Maxime

[0]: http://dpdk.org/dev/patchwork/patch/16423/