Date: Wed, 5 Dec 2018 19:56:43 -0500
From: "Michael S. Tsirkin"
To: Ilya Maximets
Cc: Maxime Coquelin, dev@dpdk.org, tiwei.bie@intel.com, zhihong.wang@intel.com, jfreimann@redhat.com
Message-ID: <20181205193227-mutt-send-email-mst@kernel.org>
In-Reply-To: <7fbcfcea-3c81-d5d1-86bf-8fe8e63d4468@samsung.com>
Subject: Re: [dpdk-dev] vhost: batch used descriptors chains write-back with packed ring

On Wed, Dec 05, 2018 at 07:01:23PM +0300, Ilya Maximets wrote:
> On 28.11.2018 12:47, Maxime Coquelin wrote:
> > Instead of writing back descriptor chains in order, let's
> > write the first chain's flags last in order to improve batching.
>
> I'm not sure if this is fully compliant with the virtio spec.
> It says that 'each side (driver and device) are only required to poll
> (or test) a single location in memory', but it does not forbid testing
> other descriptors. So, if the driver tries to check not only
> 'the next device descriptor after the one they processed previously,
> in circular order' but a few descriptors ahead, it could read
> inconsistent memory, because there are no longer write barriers between
> the flags and the id/len updates.
>
> What do you think?

Write barriers for SMP effects are quite cheap on most architectures.
So adding them before each flag write is probably not a big deal.

> >
> > With Kernel's pktgen benchmark, ~3% performance gain is measured.
> >
> > Signed-off-by: Maxime Coquelin
> > Tested-by: Jens Freimann
> > Reviewed-by: Jens Freimann
> > ---
> >  lib/librte_vhost/virtio_net.c | 37 ++++++++++++++++++++++-------------
> >  1 file changed, 23 insertions(+), 14 deletions(-)
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 5e1a1a727..f54642c2d 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -135,19 +135,10 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
> >  				  struct vhost_virtqueue *vq)
> >  {
> >  	int i;
> > -	uint16_t used_idx = vq->last_used_idx;
> > +	uint16_t head_flags, head_idx = vq->last_used_idx;
> >  
> > -	/* Split loop in two to save memory barriers */
> > -	for (i = 0; i < vq->shadow_used_idx; i++) {
> > -		vq->desc_packed[used_idx].id = vq->shadow_used_packed[i].id;
> > -		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
> > -
> > -		used_idx += vq->shadow_used_packed[i].count;
> > -		if (used_idx >= vq->size)
> > -			used_idx -= vq->size;
> > -	}
> > -
> > -	rte_smp_wmb();
> > +	if (unlikely(vq->shadow_used_idx == 0))
> > +		return;
> >  
> >  	for (i = 0; i < vq->shadow_used_idx; i++) {
> >  		uint16_t flags;
> > @@ -165,12 +156,22 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
> >  			flags &= ~VRING_DESC_F_AVAIL;
> >  		}
> >  
> > -		vq->desc_packed[vq->last_used_idx].flags = flags;
> > +		vq->desc_packed[vq->last_used_idx].id =
> > +			vq->shadow_used_packed[i].id;
> > +		vq->desc_packed[vq->last_used_idx].len =
> > +			vq->shadow_used_packed[i].len;
> > +
> > +		if (i > 0) {

Specifically here?

> > +			vq->desc_packed[vq->last_used_idx].flags = flags;
> >  
> > -		vhost_log_cache_used_vring(dev, vq,
> > +			vhost_log_cache_used_vring(dev, vq,
> >  					vq->last_used_idx *
> >  					sizeof(struct vring_packed_desc),
> >  					sizeof(struct vring_packed_desc));
> > +		} else {
> > +			head_idx = vq->last_used_idx;
> > +			head_flags = flags;
> > +		}
> >  
> >  		vq->last_used_idx += vq->shadow_used_packed[i].count;
> >  		if (vq->last_used_idx >= vq->size) {
> > @@ -180,7 +181,15 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
> >  	}
> >  
> >  	rte_smp_wmb();
> > +
> > +	vq->desc_packed[head_idx].flags = head_flags;
> >  	vq->shadow_used_idx = 0;
> > +
> > +	vhost_log_cache_used_vring(dev, vq,
> > +				head_idx *
> > +				sizeof(struct vring_packed_desc),
> > +				sizeof(struct vring_packed_desc));
> > +
> >  	vhost_log_cache_sync(dev, vq);
> >  }
> > 
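
For reference, here is a minimal sketch of the per-descriptor barrier variant floated above: a write barrier before every flags store, instead of deferring only the head descriptor's flags. It is illustrative only, not part of Maxime's patch; it assumes the same vhost internals as the quoted code (vq->desc_packed, vq->shadow_used_packed, vq->used_wrap_counter) and the same helpers (rte_smp_wmb, vhost_log_cache_used_vring, vhost_log_cache_sync).

static void
flush_shadow_used_ring_packed_barrier_per_desc(struct virtio_net *dev,
		struct vhost_virtqueue *vq)
{
	int i;

	for (i = 0; i < vq->shadow_used_idx; i++) {
		uint16_t flags;

		/* Same flags computation as in the quoted code. */
		if (vq->shadow_used_packed[i].len)
			flags = VRING_DESC_F_WRITE;
		else
			flags = 0;

		if (vq->used_wrap_counter) {
			flags |= VRING_DESC_F_USED;
			flags |= VRING_DESC_F_AVAIL;
		} else {
			flags &= ~VRING_DESC_F_USED;
			flags &= ~VRING_DESC_F_AVAIL;
		}

		vq->desc_packed[vq->last_used_idx].id =
			vq->shadow_used_packed[i].id;
		vq->desc_packed[vq->last_used_idx].len =
			vq->shadow_used_packed[i].len;

		/* Make id/len visible before the flags flip, so a driver
		 * polling a few descriptors ahead never observes a
		 * half-written entry. */
		rte_smp_wmb();
		vq->desc_packed[vq->last_used_idx].flags = flags;

		vhost_log_cache_used_vring(dev, vq,
				vq->last_used_idx *
				sizeof(struct vring_packed_desc),
				sizeof(struct vring_packed_desc));

		vq->last_used_idx += vq->shadow_used_packed[i].count;
		if (vq->last_used_idx >= vq->size) {
			vq->last_used_idx -= vq->size;
			vq->used_wrap_counter ^= 1;
		}
	}

	vq->shadow_used_idx = 0;
	vhost_log_cache_sync(dev, vq);
}

This trades one barrier per descriptor for the single deferred head-flags write of the patch; whether that costs the measured ~3% gain would need to be benchmarked.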