From: Jens Freimann <jfreimann@redhat.com>
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, yliu@fridaylinux.org, maxime.coquelin@redhat.com,
	mst@redhat.com
Date: Mon, 29 Jan 2018 15:11:42 +0100
Message-Id: <20180129141143.13437-14-jfreimann@redhat.com>
In-Reply-To: <20180129141143.13437-1-jfreimann@redhat.com>
References: <20180129141143.13437-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH 13/14] vhost: packed queue enqueue path

Implement enqueue of packets to the receive virtqueue.

Set the descriptor flag VIRTQ_DESC_F_USED and toggle the used wrap
counter whenever the last descriptor in the ring has been used. Perform
a write memory barrier before the flags are written back to the
descriptor.

Chained descriptors are not supported by this patch.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
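The helpers desc_is_avail(), set_desc_used() and toggle_wrap_counter()
used below are introduced by an earlier patch in this series. For
readers picking up this patch in isolation, here is a minimal sketch of
their assumed semantics; the flag bit positions and the
used_wrap_counter field are assumptions based on the virtio 1.1
packed-ring draft, not the exact definitions from the series:

/*
 * Sketch only: 'struct vring_desc_1_1' and the 'used_wrap_counter'
 * field of 'struct vhost_virtqueue' come from earlier patches in this
 * series; the bit positions below follow the virtio 1.1 draft
 * (VIRTQ_DESC_F_AVAIL / VIRTQ_DESC_F_USED) and are assumptions here.
 */
#define DESC_AVAIL	(1 << 7)
#define DESC_USED	(1 << 15)

static inline void
toggle_wrap_counter(struct vhost_virtqueue *vq)
{
	vq->used_wrap_counter ^= 1;
}

static inline int
desc_is_avail(struct vhost_virtqueue *vq, struct vring_desc_1_1 *desc)
{
	/* A descriptor is available in the current round iff its AVAIL
	 * bit matches the wrap counter and its USED bit does not.
	 */
	if (vq->used_wrap_counter)
		return (desc->flags & DESC_AVAIL) &&
			!(desc->flags & DESC_USED);

	return !(desc->flags & DESC_AVAIL) &&
		(desc->flags & DESC_USED);
}

static inline void
set_desc_used(struct vhost_virtqueue *vq, struct vring_desc_1_1 *desc)
{
	uint16_t flags = desc->flags;

	/* Making AVAIL and USED equal hands the descriptor back to the
	 * driver for the current round.
	 */
	if (vq->used_wrap_counter)
		flags |= DESC_AVAIL | DESC_USED;
	else
		flags &= ~(DESC_AVAIL | DESC_USED);

	desc->flags = flags;
}

Under these semantics, the rte_smp_wmb() in the enqueue loop below
ensures the descriptor's address and length updates are visible before
set_desc_used() flips the ownership bits back to the driver.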

 lib/librte_vhost/virtio_net.c | 120 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5d4cfe8cc..c1b77fff5 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -695,6 +695,126 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	return pkt_idx;
 }
 
+static inline uint32_t __attribute__((always_inline))
+vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	struct vhost_virtqueue *vq;
+	struct vring_desc_1_1 *descs;
+	uint16_t idx;
+	uint16_t mask;
+	uint16_t i;
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	if (unlikely(vq->enabled == 0)) {
+		i = 0;
+		goto out_access_unlock;
+	}
+
+	descs = vq->desc_1_1;
+	mask = vq->size - 1;
+
+	for (i = 0; i < count; i++) {
+		uint32_t desc_avail, desc_offset;
+		uint32_t mbuf_avail, mbuf_offset;
+		uint32_t cpy_len;
+		struct vring_desc_1_1 *desc;
+		uint64_t desc_addr;
+		struct virtio_net_hdr_mrg_rxbuf *hdr;
+		struct rte_mbuf *m = pkts[i];
+
+		/* XXX: there is an assumption that no desc will be chained */
+		idx = vq->last_used_idx & mask;
+		desc = &descs[idx];
+
+		if (!desc_is_avail(vq, desc))
+			break;
+		rte_smp_rmb();
+
+		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		/*
+		 * Checking of 'desc_addr' placed outside of 'unlikely' macro
+		 * to avoid performance issue with some versions of gcc (4.8.4
+		 * and 5.3.0) which otherwise store the offset on the stack
+		 * instead of in a register.
+		 */
+		if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr)
+			break;
+
+		hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;
+		virtio_enqueue_offload(m, &hdr->hdr);
+		vhost_log_write(dev, desc->addr, dev->vhost_hlen);
+		PRINT_PACKET(dev, (uintptr_t)desc_addr, dev->vhost_hlen, 0);
+
+		desc_offset = dev->vhost_hlen;
+		desc_avail = desc->len - dev->vhost_hlen;
+
+		mbuf_avail = rte_pktmbuf_data_len(m);
+		mbuf_offset = 0;
+		while (mbuf_avail != 0 || m->next != NULL) {
+			/* done with current mbuf, fetch next */
+			if (mbuf_avail == 0) {
+				m = m->next;
+
+				mbuf_offset = 0;
+				mbuf_avail = rte_pktmbuf_data_len(m);
+			}
+
+			/* done with current desc buf, fetch next */
+			if (desc_avail == 0) {
+				if ((desc->flags & VRING_DESC_F_NEXT) == 0) {
+					/* not enough room in the vring buffer */
+					goto end_of_tx;
+				}
+
+				idx = (idx + 1) & mask;
+				desc = &descs[idx];
+				if (unlikely(!desc_is_avail(vq, desc)))
+					goto end_of_tx;
+
+				desc_addr = rte_vhost_gpa_to_vva(dev->mem,
+						desc->addr);
+				if (unlikely(!desc_addr))
+					goto end_of_tx;
+
+				desc_offset = 0;
+				desc_avail = desc->len;
+			}
+
+			cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+			rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)),
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
+				cpy_len);
+			vhost_log_write(dev, desc->addr + desc_offset, cpy_len);
+			PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
+				     cpy_len, 0);
+
+			mbuf_avail -= cpy_len;
+			mbuf_offset += cpy_len;
+			desc_avail -= cpy_len;
+			desc_offset += cpy_len;
+		}
+
+		descs[idx].len = pkts[i]->pkt_len + dev->vhost_hlen;
+		rte_smp_wmb();
+		set_desc_used(vq, desc);
+
+		vq->last_used_idx++;
+		if ((vq->last_used_idx & (vq->size - 1)) == 0)
+			toggle_wrap_counter(vq);
+	}
+
+end_of_tx:
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
+	count = i;
+
+	return count;
+}
+
 uint16_t
 rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint16_t count)
-- 
2.14.3