From mboxrd@z Thu Jan  1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: tiwei.bie@intel.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: Maxime Coquelin
Date: Fri, 6 Jul 2018 09:04:45 +0200
Message-Id: <20180706070449.1946-2-maxime.coquelin@redhat.com>
In-Reply-To: <20180706070449.1946-1-maxime.coquelin@redhat.com>
References: <20180706070449.1946-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v4 1/5] vhost: use shadow used ring in dequeue path
List-Id: DPDK patches and discussions
X-List-Received-Date: Fri, 06 Jul 2018 07:04:55 -0000

Relax used ring contention by reusing the shadow used ring feature
already used by the enqueue path.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 50 +++++++++----------------------------------
 1 file changed, 10 insertions(+), 40 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 98ad8e936..741267345 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1019,35 +1019,6 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return error;
 }
 
-static __rte_always_inline void
-update_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
-		uint32_t used_idx, uint32_t desc_idx)
-{
-	vq->used->ring[used_idx].id = desc_idx;
-	vq->used->ring[used_idx].len = 0;
-	vhost_log_cache_used_vring(dev, vq,
-			offsetof(struct vring_used, ring[used_idx]),
-			sizeof(vq->used->ring[used_idx]));
-}
-
-static __rte_always_inline void
-update_used_idx(struct virtio_net *dev, struct vhost_virtqueue *vq,
-		uint32_t count)
-{
-	if (unlikely(count == 0))
-		return;
-
-	rte_smp_wmb();
-	rte_smp_rmb();
-
-	vhost_log_cache_sync(dev, vq);
-
-	vq->used->idx += count;
-	vhost_log_used_vring(dev, vq, offsetof(struct vring_used, idx),
-			sizeof(vq->used->idx));
-	vhost_vring_call(dev, vq);
-}
-
 static __rte_always_inline struct zcopy_mbuf *
 get_zmbuf(struct vhost_virtqueue *vq)
 {
@@ -1115,7 +1086,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf *rarp_mbuf = NULL;
 	struct vhost_virtqueue *vq;
 	uint32_t desc_indexes[MAX_PKT_BURST];
-	uint32_t used_idx;
 	uint32_t i = 0;
 	uint16_t free_entries;
 	uint16_t avail_idx;
@@ -1146,6 +1116,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		goto out_access_unlock;
 
 	vq->batch_copy_nb_elems = 0;
+	vq->shadow_used_idx = 0;
 
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_lock(vq);
@@ -1163,9 +1134,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			next = TAILQ_NEXT(zmbuf, next);
 
 			if (mbuf_is_consumed(zmbuf->mbuf)) {
-				used_idx = vq->last_used_idx++ & (vq->size - 1);
-				update_used_ring(dev, vq, used_idx,
-						zmbuf->desc_idx);
+				update_shadow_used_ring(vq, zmbuf->desc_idx, 0);
 				nr_updated += 1;
 
 				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
@@ -1176,7 +1145,9 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			}
 		}
 
-		update_used_idx(dev, vq, nr_updated);
+		flush_shadow_used_ring(dev, vq);
+		vhost_vring_call(dev, vq);
+		vq->shadow_used_idx = 0;
 	}
 
 	/*
@@ -1217,9 +1188,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	/* Prefetch available and used ring */
 	avail_idx = vq->last_avail_idx & (vq->size - 1);
-	used_idx = vq->last_used_idx & (vq->size - 1);
 	rte_prefetch0(&vq->avail->ring[avail_idx]);
-	rte_prefetch0(&vq->used->ring[used_idx]);
 
 	count = RTE_MIN(count, MAX_PKT_BURST);
 	count = RTE_MIN(count, free_entries);
@@ -1229,11 +1198,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	/* Retrieve all of the head indexes first to avoid caching issues. */
 	for (i = 0; i < count; i++) {
 		avail_idx = (vq->last_avail_idx + i) & (vq->size - 1);
-		used_idx = (vq->last_used_idx + i) & (vq->size - 1);
 		desc_indexes[i] = vq->avail->ring[avail_idx];
 
 		if (likely(dev->dequeue_zero_copy == 0))
-			update_used_ring(dev, vq, used_idx, desc_indexes[i]);
+			update_shadow_used_ring(vq, desc_indexes[i], 0);
 	}
 
 	/* Prefetch descriptor index. */
@@ -1326,8 +1294,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	if (likely(dev->dequeue_zero_copy == 0)) {
 		do_data_copy_dequeue(vq);
-		vq->last_used_idx += i;
-		update_used_idx(dev, vq, i);
+		if (unlikely(i < count))
+			vq->shadow_used_idx = i;
+		flush_shadow_used_ring(dev, vq);
+		vhost_vring_call(dev, vq);
 	}
 
 out:
-- 
2.14.4