Date: Fri, 23 Sep 2016 13:24:14 -0700
From: Stephen Hemminger
To: "Michael S. Tsirkin"
Cc: Maxime Coquelin, yuanhan.liu@linux.intel.com, huawei.xie@intel.com, dev@dpdk.org, vkaplans@redhat.com
Subject: Re: [dpdk-dev] [PATCH v3] vhost: Add indirect descriptors support to the TX path
Message-ID: <20160923132414.3fb52474@xeon-e3>
In-Reply-To: <20160923212116-mutt-send-email-mst@kernel.org>
References: <1474619303-16709-1-git-send-email-maxime.coquelin@redhat.com> <20160923184310-mutt-send-email-mst@kernel.org> <425573ad-216f-54e7-f4ee-998a4f87e189@redhat.com> <20160923210259-mutt-send-email-mst@kernel.org> <20160923212116-mutt-send-email-mst@kernel.org>

On Fri, 23 Sep 2016 21:22:23 +0300
"Michael S. Tsirkin" wrote:

> On Fri, Sep 23, 2016 at 08:16:09PM +0200, Maxime Coquelin wrote:
> > On 09/23/2016 08:06 PM, Michael S. Tsirkin wrote:
> > > On Fri, Sep 23, 2016 at 08:02:27PM +0200, Maxime Coquelin wrote:
> > > > On 09/23/2016 05:49 PM, Michael S. Tsirkin wrote:
> > > > > On Fri, Sep 23, 2016 at 10:28:23AM +0200, Maxime Coquelin wrote:
> > > > > > Indirect descriptors are usually supported by virtio-net devices,
> > > > > > allowing a larger number of requests to be dispatched.
> > > > > >
> > > > > > When the virtio device sends a packet using indirect descriptors,
> > > > > > only one slot is used in the ring, even for large packets.
> > > > > >
> > > > > > The main effect is to improve the 0% packet loss benchmark.
> > > > > > A PVP benchmark using Moongen (64 bytes) on the TE, and testpmd
> > > > > > (fwd io for host, macswap for VM) on the DUT, shows a +50% gain for
> > > > > > zero loss.
> > > > > >
> > > > > > On the downside, a micro-benchmark using testpmd txonly in the VM and
> > > > > > rxonly on the host shows a loss between 1 and 4%. But depending on
> > > > > > the needs, the feature can be disabled at VM boot time by passing
> > > > > > the indirect_desc=off argument to the vhost-user device in QEMU.
> > > > >
> > > > > Even better, change the guest PMD to only use indirect
> > > > > descriptors when this makes sense (e.g. sufficiently
> > > > > large packets).
> > > >
> > > > With the micro-benchmark, the degradation is quite constant whatever
> > > > the packet size.
> > > >
> > > > For PVP, I could not test with packets larger than 64 bytes, as I don't
> > > > have a 40G interface.
> > >
> > > Don't 64 byte packets fit in a single slot anyway?
> >
> > No, indirect is used. I didn't check in detail, but I think this is
> > because there is no headroom reserved in the mbuf.
> >
> > This is the condition to meet to fit in a single slot:
> >
> >     /* optimize ring usage */
> >     if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
> >         rte_mbuf_refcnt_read(txm) == 1 &&
> >         RTE_MBUF_DIRECT(txm) &&
> >         txm->nb_segs == 1 &&
> >         rte_pktmbuf_headroom(txm) >= hdr_size &&
> >         rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
> >                        __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
> >             can_push = 1;
> >     else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
> >              txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
> >             use_indirect = 1;
> >
> > I will check in more detail next week.
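For readers following along: the quoted test picks one of three transmit
layouts per packet. An annotated restatement of that logic (PMD-internal
names as in the snippet above; the wrapper function and enum are
illustrative, not the actual driver code):

    /* Sketch of the quoted slot-selection logic, with the intent of
     * each check spelled out. Not verbatim driver source. */
    enum tx_layout { TX_PUSH, TX_INDIRECT, TX_DIRECT };

    static enum tx_layout
    tx_layout_for(struct virtio_hw *hw, struct rte_mbuf *txm, size_t hdr_size)
    {
            if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) && /* host allows hdr+data in one buffer */
                rte_mbuf_refcnt_read(txm) == 1 &&        /* no other reference to the data */
                RTE_MBUF_DIRECT(txm) &&                  /* data embedded in the mbuf itself */
                txm->nb_segs == 1 &&                     /* single segment only */
                rte_pktmbuf_headroom(txm) >= hdr_size && /* room to prepend the virtio-net header */
                rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
                               __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
                    return TX_PUSH;     /* header pushed into headroom: one descriptor, one slot */

            if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
                txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
                    return TX_INDIRECT; /* descriptors in a side table: still one ring slot */

            return TX_DIRECT;           /* chained descriptors: nb_segs + 1 ring descriptors */
    }

This is why a 64-byte packet can still go indirect: if the mbuf arrives
without enough headroom for the virtio-net header, the first branch fails
regardless of packet size, and the indirect path is taken instead.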
>
> Two thoughts then:
> 1. So can some headroom be reserved?
> 2. How about using indirect with 3 s/g entries,
>    but direct with 2 and down?

The default mbuf allocator does keep headroom available.
Sounds like a test bug.
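A minimal sketch of suggestion 2, reusing the PMD-internal names from the
quoted snippet (the cutoff of 3 is the value floated above, not a measured
optimum, and the helper is hypothetical):

    /* Use the indirect table only when it actually saves ring slots.
     * With 1 or 2 segments the saving is at most one slot, while the
     * indirect path costs an extra table and cache line, so fall back
     * to direct descriptors there. Hypothetical helper, not driver code. */
    static inline int
    want_indirect(struct virtio_hw *hw, const struct rte_mbuf *txm)
    {
            return vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
                   txm->nb_segs >= 3 &&    /* direct for 2 segments and down */
                   txm->nb_segs < VIRTIO_MAX_TX_INDIRECT;
    }

On suggestion 1: pools created with rte_pktmbuf_pool_create() give each mbuf
RTE_PKTMBUF_HEADROOM (128 bytes by default) of headroom, comfortably more
than the virtio-net header, which is why a missing-headroom failure points
at the test setup rather than at the allocator.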