From: Maxime Coquelin
To: Ilya Maximets, dev@dpdk.org, tiwei.bie@intel.com, zhihong.wang@intel.com, jfreimann@redhat.com
Cc: "Michael S. Tsirkin"
Date: Thu, 6 Dec 2018 18:10:58 +0100
Subject: Re: [dpdk-dev] vhost: batch used descriptors chains write-back with packed ring
In-Reply-To: <7fbcfcea-3c81-d5d1-86bf-8fe8e63d4468@samsung.com>
References: <20181128094700.14598-1-maxime.coquelin@redhat.com> <7fbcfcea-3c81-d5d1-86bf-8fe8e63d4468@samsung.com>

On 12/5/18 5:01 PM, Ilya Maximets wrote:
> On 28.11.2018 12:47, Maxime Coquelin wrote:
>> Instead of writing back descriptor chains in order, let's write the
>> first chain's flags last in order to improve batching.
>
> I'm not sure if this is fully compliant with the virtio spec.
> It says that 'each side (driver and device) are only required to poll
> (or test) a single location in memory', but it does not forbid testing
> other descriptors. So, if the driver checks not only 'the next device
> descriptor after the one they processed previously, in circular order'
> but a few descriptors ahead, it could read inconsistent memory, because
> there is no longer a write barrier between the updates of flags and of
> id/len for those descriptors.
>
> What do you think?

Yes, that makes sense.
Moreover, it should have no cost on x86.

I'll fix it in v2.

Thanks,
Maxime

>>
>> With the kernel's pktgen benchmark, a ~3% performance gain is measured.
>>
>> Signed-off-by: Maxime Coquelin
>> Tested-by: Jens Freimann
>> Reviewed-by: Jens Freimann
>> ---
>>  lib/librte_vhost/virtio_net.c | 37 ++++++++++++++++++++++-------------
>>  1 file changed, 23 insertions(+), 14 deletions(-)
>>
>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>> index 5e1a1a727..f54642c2d 100644
>> --- a/lib/librte_vhost/virtio_net.c
>> +++ b/lib/librte_vhost/virtio_net.c
>> @@ -135,19 +135,10 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
>>  			struct vhost_virtqueue *vq)
>>  {
>>  	int i;
>> -	uint16_t used_idx = vq->last_used_idx;
>> +	uint16_t head_flags, head_idx = vq->last_used_idx;
>>
>> -	/* Split loop in two to save memory barriers */
>> -	for (i = 0; i < vq->shadow_used_idx; i++) {
>> -		vq->desc_packed[used_idx].id = vq->shadow_used_packed[i].id;
>> -		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
>> -
>> -		used_idx += vq->shadow_used_packed[i].count;
>> -		if (used_idx >= vq->size)
>> -			used_idx -= vq->size;
>> -	}
>> -
>> -	rte_smp_wmb();
>> +	if (unlikely(vq->shadow_used_idx == 0))
>> +		return;
>>
>>  	for (i = 0; i < vq->shadow_used_idx; i++) {
>>  		uint16_t flags;
>> @@ -165,12 +156,22 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
>>  			flags &= ~VRING_DESC_F_AVAIL;
>>  		}
>>
>> -		vq->desc_packed[vq->last_used_idx].flags = flags;
>> +		vq->desc_packed[vq->last_used_idx].id =
>> +			vq->shadow_used_packed[i].id;
>> +		vq->desc_packed[vq->last_used_idx].len =
>> +			vq->shadow_used_packed[i].len;
>> +
>> +		if (i > 0) {
>> +			vq->desc_packed[vq->last_used_idx].flags = flags;
>>
>> -		vhost_log_cache_used_vring(dev, vq,
>> +			vhost_log_cache_used_vring(dev, vq,
>>  					vq->last_used_idx *
>>  					sizeof(struct vring_packed_desc),
>>  					sizeof(struct vring_packed_desc));
>> +		} else {
>> +			head_idx = vq->last_used_idx;
>> +			head_flags = flags;
>> +		}
>>
>>  		vq->last_used_idx += vq->shadow_used_packed[i].count;
>>  		if (vq->last_used_idx >= vq->size) {
>> @@ -180,7 +181,15 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
>>  	}
>>
>>  	rte_smp_wmb();
>> +
>> +	vq->desc_packed[head_idx].flags = head_flags;
>>  	vq->shadow_used_idx = 0;
>> +
>> +	vhost_log_cache_used_vring(dev, vq,
>> +				head_idx *
>> +				sizeof(struct vring_packed_desc),
>> +				sizeof(struct vring_packed_desc));
>> +
>>  	vhost_log_cache_sync(dev, vq);
>>  }
>>
>>
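To make the ordering being discussed concrete, here is a toy, self-contained C sketch (not DPDK code, and not necessarily what the v2 patch will look like) of the write-back rule implied above: id/len are stored first, a write barrier precedes every non-head flags store so a driver that peeks ahead never sees an entry marked used while its id/len are stale, and the head descriptor's flags are written last, after a final barrier, so the whole batch becomes visible at once. The struct, names, single "used" flag and barrier macro are all simplifications; ring wrap-around, the wrap counter and per-chain descriptor counts are ignored.

#include <stdint.h>

/* Toy descriptor layout, loosely mirroring a packed virtqueue entry. */
struct toy_packed_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

#define TOY_DESC_F_USED	(1 << 15)

/* Stand-in for rte_smp_wmb(); a release fence orders the prior stores. */
#define toy_smp_wmb()	__atomic_thread_fence(__ATOMIC_RELEASE)

/*
 * Publish 'count' used descriptors starting at 'head'.  The head's flags
 * are written last so the driver reaps the whole batch in one go; every
 * other flags store is still preceded by a barrier so a look-ahead driver
 * never observes a "used" entry with stale id/len.
 */
static void
toy_flush_used(struct toy_packed_desc *ring, uint16_t head, uint16_t count,
	       const uint16_t *ids, const uint32_t *lens, uint16_t base_flags)
{
	uint16_t i;

	if (count == 0)
		return;

	for (i = 0; i < count; i++) {
		/* 1. id/len of this element first. */
		ring[head + i].id = ids[i];
		ring[head + i].len = lens[i];

		if (i > 0) {
			/* 2. Barrier, then flags, for non-head entries. */
			toy_smp_wmb();
			ring[head + i].flags = base_flags | TOY_DESC_F_USED;
		}
		/* i == 0: defer the head's flags until the very end. */
	}

	/* 3. Final barrier, then flip the head's flags to publish the batch. */
	toy_smp_wmb();
	ring[head].flags = base_flags | TOY_DESC_F_USED;
}

Compared with writing every descriptor's flags in ring order, deferring only the head's flags still lets the driver reap the entire chain off a single flags read, which is where the batching gain reported in the commit message comes from, while the extra barriers cost nothing on x86 (stores are not reordered with other stores there), as noted in the reply above.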