From: "Michael S. Tsirkin"
To: Yuanhan Liu
Cc: Maxime Coquelin, Stephen Hemminger, dev@dpdk.org, qemu-devel@nongnu.org
Date: Mon, 10 Oct 2016 07:25:30 +0300
Subject: Re: [dpdk-dev] [Qemu-devel] [PATCH 1/2] vhost: enable any layout feature
Message-ID: <20161010072452-mutt-send-email-mst@kernel.org>
In-Reply-To: <20161010042209.GB1597@yliu-dev.sh.intel.com>

On Mon, Oct 10, 2016 at 12:22:09PM +0800, Yuanhan Liu wrote:
> On Mon, Oct 10, 2016 at 07:17:06AM +0300, Michael S. Tsirkin wrote:
> > On Mon, Oct 10, 2016 at 12:05:31PM +0800, Yuanhan Liu wrote:
> > > On Fri, Sep 30, 2016 at 10:16:43PM +0300, Michael S. Tsirkin wrote:
> > > > > > And the same is done in DPDK:
> > > > > >
> > > > > > static inline int __attribute__((always_inline))
> > > > > > copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
> > > > > >                   uint16_t max_desc, struct rte_mbuf *m, uint16_t desc_idx,
> > > > > >                   struct rte_mempool *mbuf_pool)
> > > > > > {
> > > > > > ...
> > > > > >         /*
> > > > > >          * A virtio driver normally uses at least 2 desc buffers
> > > > > >          * for Tx: the first for storing the header, and others
> > > > > >          * for storing the data.
> > > > > >          */
> > > > > >         if (likely((desc->len == dev->vhost_hlen) &&
> > > > > >                    (desc->flags & VRING_DESC_F_NEXT) != 0)) {
> > > > > >                 desc = &descs[desc->next];
> > > > > >                 if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
> > > > > >                         return -1;
> > > > > >
> > > > > >                 desc_addr = gpa_to_vva(dev, desc->addr);
> > > > > >                 if (unlikely(!desc_addr))
> > > > > >                         return -1;
> > > > > >
> > > > > >                 rte_prefetch0((void *)(uintptr_t)desc_addr);
> > > > > >
> > > > > >                 desc_offset = 0;
> > > > > >                 desc_avail  = desc->len;
> > > > > >                 nr_desc    += 1;
> > > > > >
> > > > > >                 PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
> > > > > >         } else {
> > > > > >                 desc_avail  = desc->len - dev->vhost_hlen;
> > > > > >                 desc_offset = dev->vhost_hlen;
> > > > > >         }
> > > > > Actually, the header is parsed in the DPDK vhost implementation.
> > > > > But as the Virtio PMD provides a zero'ed header, we could just parse
> > > > > the header only if VIRTIO_NET_F_NO_TX_HEADER is not negotiated.
> > > > The host can always skip the header parse if it wants to.
> > > > It didn't seem worth it to add branches there but
> > > > if I'm wrong, by all means code it up.
> > > It was added by the following commit, which yields about a 10%
> > > performance boost for the PVP case (with 64B packet size).
> > >
> > > At that time, a packet always used 2 descs. Since indirect descs are
> > > enabled (by default) now, that assumption is no longer true. What's
> > > worse, it might even slow things down a bit. That should also be
> > > part of the reason why performance is slightly worse than before.
> > >
> > > 	--yliu
> > I'm not sure I get what you are saying.
> >
> > > commit 1d41d77cf81c448c1b09e1e859bfd300e2054a98
> > > Author: Yuanhan Liu
> > > Date:   Mon May 2 17:46:17 2016 -0700
> > >
> > >     vhost: optimize dequeue for small packets
> > >
> > >     A virtio driver normally uses at least 2 desc buffers for Tx: the
> > >     first for storing the header, and the others for storing the data.
> > >
> > >     Therefore, we could fetch the first data desc buf before the main
> > >     loop, and do the copy first before the check of "are we done yet?".
> > >     This could save one check for small packets that just have one data
> > >     desc buffer and need one mbuf to store it.
> > >
> > >     Signed-off-by: Yuanhan Liu
> > >     Acked-by: Huawei Xie
> > >     Tested-by: Rich Lane
> > This fast-paths the 2-descriptors format but it's not active
> > for indirect descriptors. Is this what you mean?
>
> Yes. It's also not active when ANY_LAYOUT is actually turned on.

It's not needed there though - you only use 1 desc, so there is no point
in fetching the next one.

> > Should be a simple matter to apply this optimization for indirect.
>
> Might be.
>
> 	--yliu
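
For illustration only, here is one rough way the same fast path could be
made to cover indirect descriptors, along the lines of the "simple matter"
suggested above: resolve the indirect table once, up front, and then let
the header-only check in copy_desc_to_mbuf() run unchanged on the resolved
table. The helper name vhost_resolve_indirect() and where it would be
called from are assumptions for the sketch, not the actual DPDK patch:

        /*
         * Illustrative sketch only -- not the actual DPDK change.
         * Resolve an indirect descriptor table before entering
         * copy_desc_to_mbuf(), so the 2-desc fast path above applies
         * whether or not the guest used VRING_DESC_F_INDIRECT.
         */
        static inline int
        vhost_resolve_indirect(struct virtio_net *dev, struct vring_desc **descs,
                               uint16_t *max_desc, uint16_t *desc_idx)
        {
                struct vring_desc *desc = &(*descs)[*desc_idx];

                if (likely(!(desc->flags & VRING_DESC_F_INDIRECT)))
                        return 0;       /* direct chain: nothing to resolve */

                /* The indirect buffer is itself an array of vring_desc. */
                *descs = (struct vring_desc *)(uintptr_t)gpa_to_vva(dev, desc->addr);
                if (unlikely(!*descs))
                        return -1;

                *max_desc = desc->len / sizeof(struct vring_desc);
                *desc_idx = 0;
                return 0;
        }

The dequeue loop would then call this once per packet before
copy_desc_to_mbuf(), so the 2-desc fast path would fire for both direct
and indirect chains.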