From: "Michael S. Tsirkin" <mst@redhat.com>
To: Ilya Maximets <i.maximets@samsung.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>,
	dev@dpdk.org, tiwei.bie@intel.com, zhihong.wang@intel.com,
	jfreimann@redhat.com
Subject: Re: [dpdk-dev] vhost: batch used descriptors chains write-back with packed ring
Date: Wed, 5 Dec 2018 19:56:43 -0500
Message-ID: <20181205193227-mutt-send-email-mst@kernel.org>
In-Reply-To: <7fbcfcea-3c81-d5d1-86bf-8fe8e63d4468@samsung.com>

On Wed, Dec 05, 2018 at 07:01:23PM +0300, Ilya Maximets wrote:
> On 28.11.2018 12:47, Maxime Coquelin wrote:
> > Instead of writing back descriptor chains in order, let's
> > write the first chain's flags last in order to improve batching.
> 
> I'm not sure if this is fully compliant with the virtio spec.
> It says that 'each side (driver and device) are only required to poll
> (or test) a single location in memory', but it does not forbid testing
> other descriptors. So, if the driver tries to check not only
> 'the next device descriptor after the one they processed previously,
> in circular order' but a few descriptors ahead, it could read
> inconsistent memory, because there are no longer write barriers
> between the updates of flags and id/len for those descriptors.
> 
> What do you think?
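
To make the concern concrete, here is a sketch of the kind of
driver-side look-ahead the spec permits.  The struct layout and flag
bits follow the packed ring format; peek_next_used() itself is
hypothetical, not code from any existing driver, and ring-index wrap
of next + 1 is omitted for brevity:

#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL	(1 << 7)
#define VRING_DESC_F_USED	(1 << 15)

struct vring_packed_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

/* Hypothetical look-ahead one slot past the driver's poll point.
 * The spec only requires polling desc[next], but does not forbid
 * this.  If the device stores id/len of later chains without a
 * write barrier before the matching flags store, the flags read
 * here can say "used" while id/len are still stale. */
static int
peek_next_used(struct vring_packed_desc *desc, uint16_t next,
	       bool wrap, uint16_t *id, uint32_t *len)
{
	uint16_t flags = desc[next + 1].flags;
	bool avail = !!(flags & VRING_DESC_F_AVAIL);
	bool used = !!(flags & VRING_DESC_F_USED);

	if (avail != wrap || used != wrap)
		return 0;		/* not used yet */

	*id = desc[next + 1].id;	/* may not match flags */
	*len = desc[next + 1].len;	/* may not match flags */
	return 1;
}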

Write barriers for SMP effects are quite cheap on most architectures.
So adding them before each flag write is probably not a big deal.
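
For illustration, here is a sketch of flush_shadow_used_ring_packed()
with such a barrier added before each non-head flags store.  The
surrounding code is reconstructed from the patch below and its
context; only the in-loop rte_smp_wmb() is new, and it is a
suggestion, not merged code:

static inline void
flush_shadow_used_ring_packed(struct virtio_net *dev,
			struct vhost_virtqueue *vq)
{
	int i;
	/* head_flags initialised only to keep compilers quiet; it is
	 * always set on the first iteration. */
	uint16_t head_flags = 0, head_idx = vq->last_used_idx;

	if (unlikely(vq->shadow_used_idx == 0))
		return;

	for (i = 0; i < vq->shadow_used_idx; i++) {
		uint16_t flags;

		if (vq->shadow_used_packed[i].len)
			flags = VRING_DESC_F_WRITE;
		else
			flags = 0;

		if (vq->used_wrap_counter) {
			flags |= VRING_DESC_F_AVAIL;
			flags |= VRING_DESC_F_USED;
		} else {
			flags &= ~VRING_DESC_F_AVAIL;
			flags &= ~VRING_DESC_F_USED;
		}

		vq->desc_packed[vq->last_used_idx].id =
			vq->shadow_used_packed[i].id;
		vq->desc_packed[vq->last_used_idx].len =
			vq->shadow_used_packed[i].len;

		if (i > 0) {
			/* Suggested: order id/len before flags so a
			 * driver peeking past its poll point reads a
			 * consistent descriptor. */
			rte_smp_wmb();
			vq->desc_packed[vq->last_used_idx].flags = flags;

			vhost_log_cache_used_vring(dev, vq,
					vq->last_used_idx *
					sizeof(struct vring_packed_desc),
					sizeof(struct vring_packed_desc));
		} else {
			head_idx = vq->last_used_idx;
			head_flags = flags;
		}

		vq->last_used_idx += vq->shadow_used_packed[i].count;
		if (vq->last_used_idx >= vq->size) {
			vq->used_wrap_counter ^= 1;
			vq->last_used_idx -= vq->size;
		}
	}

	/* Barrier from the patch: the head flags are written last,
	 * making the whole batch visible to the driver at once. */
	rte_smp_wmb();

	vq->desc_packed[head_idx].flags = head_flags;
	vq->shadow_used_idx = 0;

	vhost_log_cache_used_vring(dev, vq,
				head_idx *
				sizeof(struct vring_packed_desc),
				sizeof(struct vring_packed_desc));

	vhost_log_cache_sync(dev, vq);
}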


> > 
> > With the kernel's pktgen benchmark, a ~3% performance gain is measured.
> > 
> > Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> > Tested-by: Jens Freimann <jfreimann@redhat.com>
> > Reviewed-by: Jens Freimann <jfreimann@redhat.com>
> > ---
> >  lib/librte_vhost/virtio_net.c | 37 ++++++++++++++++++++++-------------
> >  1 file changed, 23 insertions(+), 14 deletions(-)
> > 
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 5e1a1a727..f54642c2d 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -135,19 +135,10 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
> >  			struct vhost_virtqueue *vq)
> >  {
> >  	int i;
> > -	uint16_t used_idx = vq->last_used_idx;
> > +	uint16_t head_flags, head_idx = vq->last_used_idx;
> >  
> > -	/* Split loop in two to save memory barriers */
> > -	for (i = 0; i < vq->shadow_used_idx; i++) {
> > -		vq->desc_packed[used_idx].id = vq->shadow_used_packed[i].id;
> > -		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
> > -
> > -		used_idx += vq->shadow_used_packed[i].count;
> > -		if (used_idx >= vq->size)
> > -			used_idx -= vq->size;
> > -	}
> > -
> > -	rte_smp_wmb();
> > +	if (unlikely(vq->shadow_used_idx == 0))
> > +		return;
> >  
> >  	for (i = 0; i < vq->shadow_used_idx; i++) {
> >  		uint16_t flags;
> > @@ -165,12 +156,22 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
> >  			flags &= ~VRING_DESC_F_AVAIL;
> >  		}
> >  
> > -		vq->desc_packed[vq->last_used_idx].flags = flags;
> > +		vq->desc_packed[vq->last_used_idx].id =
> > +			vq->shadow_used_packed[i].id;
> > +		vq->desc_packed[vq->last_used_idx].len =
> > +			vq->shadow_used_packed[i].len;
> > +
> > +		if (i > 0) {

Specifically here?

> > +			vq->desc_packed[vq->last_used_idx].flags = flags;
> >  
> > -		vhost_log_cache_used_vring(dev, vq,
> > +			vhost_log_cache_used_vring(dev, vq,
> >  					vq->last_used_idx *
> >  					sizeof(struct vring_packed_desc),
> >  					sizeof(struct vring_packed_desc));
> > +		} else {
> > +			head_idx = vq->last_used_idx;
> > +			head_flags = flags;
> > +		}
> >  
> >  		vq->last_used_idx += vq->shadow_used_packed[i].count;
> >  		if (vq->last_used_idx >= vq->size) {
> > @@ -180,7 +181,15 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
> >  	}
> >  
> >  	rte_smp_wmb();
> > +
> > +	vq->desc_packed[head_idx].flags = head_flags;
> >  	vq->shadow_used_idx = 0;
> > +
> > +	vhost_log_cache_used_vring(dev, vq,
> > +				head_idx *
> > +				sizeof(struct vring_packed_desc),
> > +				sizeof(struct vring_packed_desc));
> > +
> >  	vhost_log_cache_sync(dev, vq);
> >  }
> >  
> > 


Thread overview: 5+ messages
2018-11-28  9:47 [dpdk-dev] [PATCH] " Maxime Coquelin
2018-11-28 10:05 ` Jens Freimann
     [not found] ` <CGME20181205160124eucas1p1470e3dc9afe8e59ceab54a58140cf400@eucas1p1.samsung.com>
2018-12-05 16:01   ` [dpdk-dev] " Ilya Maximets
2018-12-06  0:56     ` Michael S. Tsirkin [this message]
2018-12-06 17:10     ` Maxime Coquelin
