From: "Michael S. Tsirkin"
To: Yuanhan Liu
Cc: Stephen Hemminger, Maxime Coquelin, huawei.xie@intel.com, dev@dpdk.org, vkaplans@redhat.com
Date: Mon, 26 Sep 2016 15:25:35 +0300
Message-ID: <20160926152442-mutt-send-email-mst@kernel.org>
In-Reply-To: <20160926030354.GF23158@yliu-dev.sh.intel.com>
Subject: Re: [dpdk-dev] [PATCH v3] vhost: Add indirect descriptors support to the TX path

On Mon, Sep 26, 2016 at 11:03:54AM +0800, Yuanhan Liu wrote:
> On Fri, Sep 23, 2016 at 01:24:14PM -0700, Stephen Hemminger wrote:
> > On Fri, 23 Sep 2016 21:22:23 +0300
> > "Michael S. Tsirkin" wrote:
> > 
> > > On Fri, Sep 23, 2016 at 08:16:09PM +0200, Maxime Coquelin wrote:
> > > > On 09/23/2016 08:06 PM, Michael S. Tsirkin wrote:
> > > > > On Fri, Sep 23, 2016 at 08:02:27PM +0200, Maxime Coquelin wrote:
> > > > > > On 09/23/2016 05:49 PM, Michael S. Tsirkin wrote:
> > > > > > > On Fri, Sep 23, 2016 at 10:28:23AM +0200, Maxime Coquelin wrote:
> > > > > > > > Indirect descriptors are usually supported by virtio-net devices,
> > > > > > > > allowing a larger number of requests to be dispatched.
> > > > > > > > 
> > > > > > > > When the virtio device sends a packet using indirect descriptors,
> > > > > > > > only one slot is used in the ring, even for large packets.
> > > > > > > > 
> > > > > > > > The main effect is to improve the 0% packet loss benchmark.
> > > > > > > > A PVP benchmark using Moongen (64 bytes) on the TE, and testpmd
> > > > > > > > (fwd io for host, macswap for VM) on the DUT shows a +50% gain for
> > > > > > > > zero loss.
> > > > > > > > 
> > > > > > > > On the downside, a micro-benchmark using testpmd txonly in the VM
> > > > > > > > and rxonly on the host shows a loss between 1 and 4%. But depending
> > > > > > > > on the needs, the feature can be disabled at VM boot time by passing
> > > > > > > > the indirect_desc=off argument to the vhost-user device in QEMU.
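
[For illustration, the knob mentioned above could be applied roughly as follows when starting the guest; the socket path and the chardev/netdev ids are placeholders, and the shared-memory backend options that vhost-user also requires are omitted:

    -chardev socket,id=char0,path=/tmp/vhost-user.sock \
    -netdev type=vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0,indirect_desc=off
]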
> > > > > > > 
> > > > > > > Even better, change the guest pmd to only use indirect
> > > > > > > descriptors when this makes sense (e.g. sufficiently
> > > > > > > large packets).
> > > > > > 
> > > > > > With the micro-benchmark, the degradation is quite constant whatever
> > > > > > the packet size.
> > > > > > 
> > > > > > For PVP, I could not test with packets larger than 64 bytes, as I
> > > > > > don't have a 40G interface.
> > > > > 
> > > > > Don't 64 byte packets fit in a single slot anyway?
> > > > 
> > > > No, indirect is used. I didn't check in detail, but I think this is
> > > > because there is no headroom reserved in the mbuf.
> > > > 
> > > > This is the condition to meet to fit in a single slot:
> > > > 
> > > >         /* optimize ring usage */
> > > >         if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
> > > >             rte_mbuf_refcnt_read(txm) == 1 &&
> > > >             RTE_MBUF_DIRECT(txm) &&
> > > >             txm->nb_segs == 1 &&
> > > >             rte_pktmbuf_headroom(txm) >= hdr_size &&
> > > >             rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
> > > >                            __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
> > > >                 can_push = 1;
> > > >         else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
> > > >                  txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
> > > >                 use_indirect = 1;
> > > > 
> > > > I will check in more detail next week.
> > > 
> > > Two thoughts then:
> > > 1. so can some headroom be reserved?
> > > 2. how about using indirect with 3 s/g entries,
> > >    but direct with 2 and down?
> > 
> > The default mbuf allocator does keep headroom available. Sounds like a
> > test bug.
> 
> That's because we don't have VIRTIO_F_ANY_LAYOUT set, as Stephen noted
> in the v2 comments.
> 
> Since DPDK vhost has actually supported VIRTIO_F_ANY_LAYOUT for a while,
> I'd like to add it to the features list (VHOST_SUPPORTED_FEATURES).
> 
> Will drop a patch shortly.
> 
> 	--yliu

If VERSION_1 is set, then it implies ANY_LAYOUT even without that bit being set.

-- 
MST
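
[To make that last remark concrete, here is a minimal sketch; the helper name is hypothetical, while the feature bit numbers are the standard virtio ones. A backend or driver deciding whether any-layout semantics apply could treat a negotiated VERSION_1 as implying ANY_LAYOUT:

    #include <stdbool.h>
    #include <stdint.h>

    #define VIRTIO_F_ANY_LAYOUT 27  /* legacy "any layout" feature bit */
    #define VIRTIO_F_VERSION_1  32  /* virtio 1.0 feature bit */

    /* Hypothetical helper: a device that negotiated VERSION_1 is treated
     * as if ANY_LAYOUT had been negotiated, even when that bit itself is
     * not offered. */
    static bool
    any_layout_usable(uint64_t negotiated_features)
    {
        return (negotiated_features & (1ULL << VIRTIO_F_ANY_LAYOUT)) ||
               (negotiated_features & (1ULL << VIRTIO_F_VERSION_1));
    }
]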