From mboxrd@z Thu Jan  1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: tiwei.bie@intel.com, zhihong.wang@intel.com, jfreimann@redhat.com,
	dev@dpdk.org
Cc: mst@redhat.com, jasowang@redhat.com, wexu@redhat.com,
	Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 5 Jul 2018 23:07:34 +0200
Message-Id: <20180705210741.17974-9-maxime.coquelin@redhat.com>
In-Reply-To: <20180705210741.17974-1-maxime.coquelin@redhat.com>
References: <20180705210741.17974-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v8 08/15] vhost: append shadow used ring function
 names with split

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 7705c853b..7db3877d4 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -77,8 +77,9 @@ free_ind_table(void *idesc)
 }
 
 static __rte_always_inline void
-do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			  uint16_t to, uint16_t from, uint16_t size)
+do_flush_shadow_used_ring_split(struct virtio_net *dev,
+			struct vhost_virtqueue *vq,
+			uint16_t to, uint16_t from, uint16_t size)
 {
 	rte_memcpy(&vq->used->ring[to],
 			&vq->shadow_used_ring[from],
@@ -89,22 +90,22 @@ do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
+flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
 
 	if (used_idx + vq->shadow_used_idx <= vq->size) {
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0,
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0,
 					  vq->shadow_used_idx);
 	} else {
 		uint16_t size;
 
 		/* update used ring interval [used_idx, vq->size] */
 		size = vq->size - used_idx;
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0, size);
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);
 
 		/* update the left half used ring interval [0, left_size] */
-		do_flush_shadow_used_ring(dev, vq, 0, size,
+		do_flush_shadow_used_ring_split(dev, vq, 0, size,
 				vq->shadow_used_idx - size);
 	}
 	vq->last_used_idx += vq->shadow_used_idx;
@@ -120,7 +121,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 }
 
 static __rte_always_inline void
-update_shadow_used_ring(struct vhost_virtqueue *vq,
+update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 			 uint16_t desc_idx, uint16_t len)
 {
 	uint16_t i = vq->shadow_used_idx++;
@@ -353,7 +354,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 						VHOST_ACCESS_RW) < 0))
 			return -1;
 		len = RTE_MIN(len, size);
-		update_shadow_used_ring(vq, head_idx, len);
+		update_shadow_used_ring_split(vq, head_idx, len);
 		size -= len;
 
 		cur_idx++;
@@ -579,7 +580,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	do_data_copy_enqueue(dev, vq);
 
 	if (likely(vq->shadow_used_idx)) {
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1047,7 +1048,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			next = TAILQ_NEXT(zmbuf, next);
 
 			if (mbuf_is_consumed(zmbuf->mbuf)) {
-				update_shadow_used_ring(vq, zmbuf->desc_idx, 0);
+				update_shadow_used_ring_split(vq,
+						zmbuf->desc_idx, 0);
 				nr_updated += 1;
 
 				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
@@ -1058,7 +1060,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 		}
 
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1090,7 +1092,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 
 		if (likely(dev->dequeue_zero_copy == 0))
-			update_shadow_used_ring(vq, head_idx, 0);
+			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
 
@@ -1137,7 +1139,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		do_data_copy_dequeue(vq);
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
-- 
2.14.4