DPDK patches and discussions
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	huawei.xie@intel.com, dev@dpdk.org, vkaplans@redhat.com
Subject: Re: [dpdk-dev] [PATCH v3] vhost: Add indirect descriptors support to the TX path
Date: Mon, 26 Sep 2016 15:25:35 +0300
Message-ID: <20160926152442-mutt-send-email-mst@kernel.org>
In-Reply-To: <20160926030354.GF23158@yliu-dev.sh.intel.com>

On Mon, Sep 26, 2016 at 11:03:54AM +0800, Yuanhan Liu wrote:
> On Fri, Sep 23, 2016 at 01:24:14PM -0700, Stephen Hemminger wrote:
> > On Fri, 23 Sep 2016 21:22:23 +0300
> > "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > 
> > > On Fri, Sep 23, 2016 at 08:16:09PM +0200, Maxime Coquelin wrote:
> > > > 
> > > > 
> > > > On 09/23/2016 08:06 PM, Michael S. Tsirkin wrote:  
> > > > > On Fri, Sep 23, 2016 at 08:02:27PM +0200, Maxime Coquelin wrote:  
> > > > > > 
> > > > > > 
> > > > > > On 09/23/2016 05:49 PM, Michael S. Tsirkin wrote:  
> > > > > > > On Fri, Sep 23, 2016 at 10:28:23AM +0200, Maxime Coquelin wrote:  
> > > > > > > > Indirect descriptors are usually supported by virtio-net devices,
> > > > > > > > allowing a larger number of requests to be dispatched.
> > > > > > > > 
> > > > > > > > When the virtio device sends a packet using indirect descriptors,
> > > > > > > > only one slot is used in the ring, even for large packets.
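
(For readers new to the feature: a minimal sketch of why a multi-segment
packet still costs only one ring slot. The descriptor layout mirrors
linux/virtio_ring.h, but the helper below is made up for illustration and
simplifies the address handling; a real driver stores a guest-physical
address, not a host pointer.)

    #include <stdint.h>

    /* Descriptor layout as in the virtio ring (see linux/virtio_ring.h). */
    struct vring_desc {
            uint64_t addr;   /* guest-physical address of the buffer  */
            uint32_t len;    /* buffer length in bytes                */
            uint16_t flags;  /* VRING_DESC_F_NEXT / _INDIRECT / ...   */
            uint16_t next;   /* index of the next chained descriptor  */
    };

    #define VRING_DESC_F_NEXT     1
    #define VRING_DESC_F_INDIRECT 4

    /* One ring slot is enough for a multi-segment packet because the slot
     * points at a separate table holding one descriptor per segment
     * (virtio-net header + data), chained through 'next'. */
    static void
    fill_indirect_slot(struct vring_desc *ring_slot,
                       const struct vring_desc *table, uint16_t nb_entries)
    {
            ring_slot->addr  = (uintptr_t)table;   /* simplified: host VA */
            ring_slot->len   = nb_entries * sizeof(*table);
            ring_slot->flags = VRING_DESC_F_INDIRECT;
            ring_slot->next  = 0;
    }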
> > > > > > > > 
> > > > > > > > The main effect is to improve the 0% packet loss benchmark.
> > > > > > > > A PVP benchmark using MoonGen (64 bytes) on the TE, and testpmd
> > > > > > > > (io forwarding mode on the host, macswap in the VM) on the DUT,
> > > > > > > > shows a +50% gain for zero loss.
> > > > > > > > 
> > > > > > > > On the downside, a micro-benchmark using testpmd txonly in the VM
> > > > > > > > and rxonly on the host shows a loss of between 1 and 4%. But
> > > > > > > > depending on the needs, the feature can be disabled at VM boot time
> > > > > > > > by passing the indirect_desc=off argument to the vhost-user device
> > > > > > > > in QEMU.
> > > > > > > 
> > > > > > > Even better, change the guest PMD to only use indirect
> > > > > > > descriptors when it makes sense (e.g. for sufficiently
> > > > > > > large packets).
> > > > > > With the micro-benchmark, the degradation is fairly constant regardless
> > > > > > of the packet size.
> > > > > > 
> > > > > > For PVP, I could not test with packets larger than 64 bytes, as I don't
> > > > > > have a 40G interface,
> > > > > 
> > > > > Don't 64 byte packets fit in a single slot anyway?  
> > > > No, indirect is used. I didn't check in detail, but I think this is
> > > > because there is no headroom reserved in the mbuf.
> > > > 
> > > > This is the condition that must be met to fit in a single slot:
> > > > /* optimize ring usage */
> > > > if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
> > > > 	rte_mbuf_refcnt_read(txm) == 1 &&
> > > > 	RTE_MBUF_DIRECT(txm) &&
> > > > 	txm->nb_segs == 1 &&
> > > > 	rte_pktmbuf_headroom(txm) >= hdr_size &&
> > > > 	rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
> > > > 		__alignof__(struct virtio_net_hdr_mrg_rxbuf)))
> > > >     can_push = 1;
> > > > else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
> > > > 	txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
> > > >     use_indirect = 1;
> > > > 
> > > > I will check in more detail next week.
> > > 
> > > Two thoughts, then:
> > > 1. So, can some headroom be reserved?
> > > 2. How about using indirect with 3 s/g entries,
> > >    but direct with 2 and down?
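
A minimal sketch of what suggestion 2 could look like, assuming a made-up
helper in the guest PMD's TX path; the name and the exact threshold are
illustrative only, not the code that was eventually merged:

    #include <rte_mbuf.h>

    #ifndef VIRTIO_MAX_TX_INDIRECT
    #define VIRTIO_MAX_TX_INDIRECT 8   /* value used in the virtio PMD's virtqueue.h */
    #endif

    /* Pick the TX layout from the number of ring slots the packet would
     * consume: keep direct descriptors for 2 slots and down, switch to an
     * indirect table for 3 slots and up. */
    static inline int
    txvq_use_indirect(const struct rte_mbuf *txm, int can_push)
    {
            /* Without a pushable header, the virtio-net header occupies one
             * extra slot in front of the data segments. */
            uint16_t slots = txm->nb_segs + (can_push ? 0 : 1);

            return !can_push && slots >= 3 && slots < VIRTIO_MAX_TX_INDIRECT;
    }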
> > 
> > The default mbuf allocator does keep headroom available. Sounds like a
> > test bug.
> 
> That's because we don't have VIRTIO_F_ANY_LAYOUT set, as Stephen noted in
> the comments on v2.
> 
> Since DPDK vhost has actually supported VIRTIO_F_ANY_LAYOUT for a while, I'd
> like to add it to the features list (VHOST_SUPPORTED_FEATURES).
> 
> Will send a patch shortly.
> 
> 	--yliu

If VERSION_1 is set, then ANY_LAYOUT is implied even without it being set explicitly.
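
As a sketch of how a backend could honour that implication (the helper name
is made up; only the feature bit numbers are taken from the virtio
specification):

    #include <stdint.h>

    #define VIRTIO_F_ANY_LAYOUT  27   /* legacy "any layout" feature bit   */
    #define VIRTIO_F_VERSION_1   32   /* virtio 1.0 compliance feature bit */

    /* ANY_LAYOUT semantics hold either when the bit itself was negotiated
     * or when VERSION_1 was, since virtio 1.0 makes the any-layout
     * behaviour mandatory. */
    static inline int
    features_allow_any_layout(uint64_t negotiated_features)
    {
            return (negotiated_features & (1ULL << VIRTIO_F_ANY_LAYOUT)) ||
                   (negotiated_features & (1ULL << VIRTIO_F_VERSION_1));
    }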

-- 
MST


Thread overview: 52+ messages
2016-09-23  8:28 Maxime Coquelin
2016-09-23 15:49 ` Michael S. Tsirkin
2016-09-23 18:02   ` Maxime Coquelin
2016-09-23 18:06     ` Michael S. Tsirkin
2016-09-23 18:16       ` Maxime Coquelin
2016-09-23 18:22         ` Michael S. Tsirkin
2016-09-23 20:24           ` Stephen Hemminger
2016-09-26  3:03             ` Yuanhan Liu
2016-09-26 12:25               ` Michael S. Tsirkin [this message]
2016-09-26 13:04                 ` Yuanhan Liu
2016-09-27  4:15 ` Yuanhan Liu
2016-09-27  7:25   ` Maxime Coquelin
2016-09-27  8:42 ` [dpdk-dev] [PATCH v4] " Maxime Coquelin
2016-09-27 12:18   ` Yuanhan Liu
2016-10-14  7:24   ` Wang, Zhihong
2016-10-14  7:34     ` Wang, Zhihong
2016-10-14 15:50     ` Maxime Coquelin
2016-10-17 11:23       ` Maxime Coquelin
2016-10-17 13:21         ` Yuanhan Liu
2016-10-17 14:14           ` Maxime Coquelin
2016-10-27  9:00             ` Wang, Zhihong
2016-10-27  9:10               ` Maxime Coquelin
2016-10-27  9:55                 ` Maxime Coquelin
2016-10-27 10:19                   ` Wang, Zhihong
2016-10-28  7:32                     ` Pierre Pfister (ppfister)
2016-10-28  7:58                       ` Maxime Coquelin
2016-11-01  8:15                         ` Yuanhan Liu
2016-11-01  9:39                           ` Thomas Monjalon
2016-11-02  2:44                             ` Yuanhan Liu
2016-10-27 10:33                 ` Yuanhan Liu
2016-10-27 10:35                   ` Maxime Coquelin
2016-10-27 10:46                     ` Yuanhan Liu
2016-10-28  0:49                       ` Wang, Zhihong
2016-10-28  7:42                         ` Maxime Coquelin
2016-10-31 10:01                           ` Wang, Zhihong
2016-11-02 10:51                             ` Maxime Coquelin
2016-11-03  8:11                               ` Maxime Coquelin
2016-11-04  6:18                                 ` Xu, Qian Q
2016-11-04  7:41                                   ` Maxime Coquelin
2016-11-04  7:20                                 ` Wang, Zhihong
2016-11-04  7:57                                   ` Maxime Coquelin
2016-11-04  7:59                                     ` Maxime Coquelin
2016-11-04 10:43                                       ` Wang, Zhihong
2016-11-04 11:22                                         ` Maxime Coquelin
2016-11-04 11:36                                           ` Yuanhan Liu
2016-11-04 11:39                                             ` Maxime Coquelin
2016-11-04 12:30                                           ` Wang, Zhihong
2016-11-04 12:54                                             ` Maxime Coquelin
2016-11-04 13:09                                               ` Wang, Zhihong
2016-11-08 10:51                                                 ` Wang, Zhihong
2016-10-27 10:53                   ` Maxime Coquelin
2016-10-28  6:05                     ` Xu, Qian Q
