From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: tiwei.bie@intel.com, zhihong.wang@intel.com, jfreimann@redhat.com, dev@dpdk.org
Cc: mst@redhat.com, jasowang@redhat.com, wexu@redhat.com, Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Fri, 6 Jul 2018 09:07:15 +0200
Message-Id: <20180706070722.2043-9-maxime.coquelin@redhat.com>
In-Reply-To: <20180706070722.2043-1-maxime.coquelin@redhat.com>
References: <20180706070722.2043-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v9 08/15] vhost: append shadow used ring function names with split

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index bdfd6ebef..ae256e062 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -77,8 +77,9 @@ free_ind_table(void *idesc)
 }
 
 static __rte_always_inline void
-do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			  uint16_t to, uint16_t from, uint16_t size)
+do_flush_shadow_used_ring_split(struct virtio_net *dev,
+			struct vhost_virtqueue *vq,
+			uint16_t to, uint16_t from, uint16_t size)
 {
 	rte_memcpy(&vq->used->ring[to],
 			&vq->shadow_used_ring[from],
@@ -89,22 +90,22 @@ do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
+flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
 
 	if (used_idx + vq->shadow_used_idx <= vq->size) {
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0,
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0,
 					vq->shadow_used_idx);
 	} else {
 		uint16_t size;
 
 		/* update used ring interval [used_idx, vq->size] */
 		size = vq->size - used_idx;
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0, size);
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);
 
 		/* update the left half used ring interval [0, left_size] */
-		do_flush_shadow_used_ring(dev, vq, 0, size,
+		do_flush_shadow_used_ring_split(dev, vq, 0, size,
 				vq->shadow_used_idx - size);
 	}
 	vq->last_used_idx += vq->shadow_used_idx;
@@ -120,7 +121,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 }
 
 static __rte_always_inline void
-update_shadow_used_ring(struct vhost_virtqueue *vq,
+update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 			uint16_t desc_idx, uint16_t len)
 {
 	uint16_t i = vq->shadow_used_idx++;
@@ -353,7 +354,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 					VHOST_ACCESS_RW) < 0))
 			return -1;
 		len = RTE_MIN(len, size);
-		update_shadow_used_ring(vq, head_idx, len);
+		update_shadow_used_ring_split(vq, head_idx, len);
 		size -= len;
 
 		cur_idx++;
@@ -579,7 +580,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	do_data_copy_enqueue(dev, vq);
 
 	if (likely(vq->shadow_used_idx)) {
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1048,7 +1049,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			next = TAILQ_NEXT(zmbuf, next);
 
 			if (mbuf_is_consumed(zmbuf->mbuf)) {
-				update_shadow_used_ring(vq, zmbuf->desc_idx, 0);
+				update_shadow_used_ring_split(vq,
+						zmbuf->desc_idx, 0);
 				nr_updated += 1;
 
 				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
@@ -1059,7 +1061,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 		}
 
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1091,7 +1093,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 
 		if (likely(dev->dequeue_zero_copy == 0))
-			update_shadow_used_ring(vq, head_idx, 0);
+			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
@@ -1138,7 +1140,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		do_data_copy_dequeue(vq);
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
-- 
2.14.4