From: Maxime Coquelin
To: Yuanhan Liu, "Michael S. Tsirkin"
Cc: Stephen Hemminger, dev@dpdk.org, qemu-devel@nongnu.org
Date: Mon, 10 Oct 2016 14:40:44 +0200
Message-ID: <3b71f113-26af-711c-4060-ca576260ec72@redhat.com>
In-Reply-To: <20161010042209.GB1597@yliu-dev.sh.intel.com>
References: <20160928022848.GE1597@yliu-dev.sh.intel.com>
 <20160929205047-mutt-send-email-mst@kernel.org>
 <2889e609-f750-a4e1-66f8-768bb07a2339@redhat.com>
 <20160929231252-mutt-send-email-mst@kernel.org>
 <05d62750-303c-4b9b-a5cb-9db8552f0ab2@redhat.com>
 <2b458818-01ef-0533-4366-1c35a8452e4a@redhat.com>
 <20160930221241-mutt-send-email-mst@kernel.org>
 <20161010040531.GZ1597@yliu-dev.sh.intel.com>
 <20161010070800-mutt-send-email-mst@kernel.org>
 <20161010042209.GB1597@yliu-dev.sh.intel.com>
Subject: Re: [dpdk-dev] [Qemu-devel] [PATCH 1/2] vhost: enable any layout feature
List-Id: patches and discussions about DPDK

On 10/10/2016 06:22 AM, Yuanhan Liu wrote:
> On Mon, Oct 10, 2016 at 07:17:06AM +0300, Michael S. Tsirkin wrote:
>> On Mon, Oct 10, 2016 at 12:05:31PM +0800, Yuanhan Liu wrote:
>>> On Fri, Sep 30, 2016 at 10:16:43PM +0300, Michael S. Tsirkin wrote:
>>>>>> And the same is done in DPDK:
>>>>>>
>>>>>> static inline int __attribute__((always_inline))
>>>>>> copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
>>>>>>                   uint16_t max_desc, struct rte_mbuf *m, uint16_t desc_idx,
>>>>>>                   struct rte_mempool *mbuf_pool)
>>>>>> {
>>>>>> ...
>>>>>>         /*
>>>>>>          * A virtio driver normally uses at least 2 desc buffers
>>>>>>          * for Tx: the first for storing the header, and others
>>>>>>          * for storing the data.
>>>>>>          */
>>>>>>         if (likely((desc->len == dev->vhost_hlen) &&
>>>>>>                    (desc->flags & VRING_DESC_F_NEXT) != 0)) {
>>>>>>                 desc = &descs[desc->next];
>>>>>>                 if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
>>>>>>                         return -1;
>>>>>>
>>>>>>                 desc_addr = gpa_to_vva(dev, desc->addr);
>>>>>>                 if (unlikely(!desc_addr))
>>>>>>                         return -1;
>>>>>>
>>>>>>                 rte_prefetch0((void *)(uintptr_t)desc_addr);
>>>>>>
>>>>>>                 desc_offset = 0;
>>>>>>                 desc_avail  = desc->len;
>>>>>>                 nr_desc    += 1;
>>>>>>
>>>>>>                 PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
>>>>>>         } else {
>>>>>>                 desc_avail  = desc->len - dev->vhost_hlen;
>>>>>>                 desc_offset = dev->vhost_hlen;
>>>>>>         }
>>>>>
>>>>> Actually, the header is parsed in the DPDK vhost implementation.
>>>>> But as Virtio PMD provides a zero'ed header, we could parse
>>>>> the header only if VIRTIO_NET_F_NO_TX_HEADER is not negotiated.
>>>>
>>>> host can always skip the header parse if it wants to.
>>>> It didn't seem worth it to add branches there, but
>>>> if I'm wrong, by all means code it up.
>>>
>>> It was added by the following commit, which yields about a 10%
>>> performance boost for the PVP case (with 64B packet size).
>>>
>>> At that time, a packet always used 2 descs. Since indirect descs are
>>> enabled (by default) now, the assumption no longer holds. What's
>>> worse, it might even slow things down a bit. That should also be
>>> part of the reason why performance is slightly worse than before.
>>>
>>>         --yliu
>>
>> I'm not sure I get what you are saying
>>
>>> commit 1d41d77cf81c448c1b09e1e859bfd300e2054a98
>>> Author: Yuanhan Liu
>>> Date:   Mon May 2 17:46:17 2016 -0700
>>>
>>>     vhost: optimize dequeue for small packets
>>>
>>>     A virtio driver normally uses at least 2 desc buffers for Tx: the
>>>     first for storing the header, and the others for storing the data.
>>>
>>>     Therefore, we could fetch the first data desc buf before the main
>>>     loop, and do the copy first before the check of "are we done yet?".
>>>     This could save one check for small packets that just have one data
>>>     desc buffer and need one mbuf to store it.
>>>
>>>     Signed-off-by: Yuanhan Liu
>>>     Acked-by: Huawei Xie
>>>     Tested-by: Rich Lane
>>
>> This fast-paths the 2-descriptors format but it's not active
>> for indirect descriptors. Is this what you mean?
>
> Yes. It's also not active when ANY_LAYOUT is actually turned on.
>
>> Should be a simple matter to apply this optimization for indirect.
>
> Might be.

If I understand the code correctly, indirect descs also benefit from
this optimization, or am I missing something?

Maxime