Subject: Re: [dpdk-dev] [PATCH v3 15/21] vhost: packed queue enqueue path
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Jens Freimann <jfreimann@redhat.com>, dev@dpdk.org
Cc: tiwei.bie@intel.com, yliu@fridaylinux.org, mst@redhat.com
Date: Fri, 6 Apr 2018 11:36:03 +0200
Message-ID: <9cbcb9c1-ee1f-1118-ed39-ad9b0d4be0b2@redhat.com>
In-Reply-To: <20180405101031.26468-16-jfreimann@redhat.com>
References: <20180405101031.26468-1-jfreimann@redhat.com>
 <20180405101031.26468-16-jfreimann@redhat.com>

On 04/05/2018 12:10 PM, Jens Freimann wrote:
> Implement enqueue of packets to the receive virtqueue.
>
> Set the descriptor flag VIRTQ_DESC_F_USED and toggle the used wrap
> counter when the last descriptor in the ring is used. Perform a write
> memory barrier before the flags are written to the descriptor.
>
> Chained descriptors are not supported with this patch.
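
For readers not yet familiar with the packed ring, the flag/wrap-counter
protocol described above boils down to roughly the sketch below. The
helper names (desc_is_avail(), set_desc_used()) are the ones this series
uses, but the bodies, the vq->used_wrap_counter field and the bit
positions (AVAIL = bit 7, USED = bit 15, taken from the VIRTIO 1.1
packed ring proposal) are my reconstruction, not code from the patch:

#define DESC_F_AVAIL    (1 << 7)
#define DESC_F_USED     (1 << 15)

/* A descriptor is available to us when its AVAIL bit matches the
 * ring's current wrap counter and its USED bit does not. */
static inline bool
desc_is_avail(struct vhost_virtqueue *vq, struct vring_desc_packed *desc)
{
        bool avail = !!(desc->flags & DESC_F_AVAIL);
        bool used = !!(desc->flags & DESC_F_USED);

        return avail == vq->used_wrap_counter &&
                used != vq->used_wrap_counter;
}

/* Returning a descriptor means making both bits equal to the wrap
 * counter; the counter itself is toggled each time we wrap past the
 * end of the ring. The write barrier stays in the caller, as in the
 * patch below. */
static inline void
set_desc_used(struct vhost_virtqueue *vq, struct vring_desc_packed *desc)
{
        uint16_t flags = desc->flags;

        if (vq->used_wrap_counter)
                flags |= DESC_F_AVAIL | DESC_F_USED;
        else
                flags &= ~(DESC_F_AVAIL | DESC_F_USED);

        desc->flags = flags;
}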
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
>  lib/librte_vhost/virtio_net.c | 129 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 129 insertions(+)
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 7eea1da04..578e5612e 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -695,6 +695,135 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
>  	return pkt_idx;
>  }
>
> +static inline uint32_t __attribute__((always_inline))
> +vhost_enqueue_burst_packed(struct virtio_net *dev, uint16_t queue_id,
> +	struct rte_mbuf **pkts, uint32_t count)
> +{
> +	struct vhost_virtqueue *vq;
> +	struct vring_desc_packed *descs;
> +	uint16_t idx;
> +	uint16_t mask;
> +	uint16_t i;
> +
> +	vq = dev->virtqueue[queue_id];
> +
> +	rte_spinlock_lock(&vq->access_lock);
> +
> +	if (unlikely(vq->enabled == 0)) {
> +		i = 0;
> +		goto out_access_unlock;
> +	}
> +
> +	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> +		vhost_user_iotlb_rd_lock(vq);
> +
> +	descs = vq->desc_packed;
> +	mask = vq->size - 1;
> +
> +	for (i = 0; i < count; i++) {
> +		uint32_t desc_avail, desc_offset;
> +		uint32_t mbuf_avail, mbuf_offset;
> +		uint32_t cpy_len;
> +		struct vring_desc_packed *desc;
> +		uint64_t desc_addr;
> +		struct virtio_net_hdr_mrg_rxbuf *hdr;
> +		struct rte_mbuf *m = pkts[i];
> +
> +		/* XXX: there is an assumption that no desc will be chained */

Is this assumption still true? If not, what is the plan to fix it?

> +		idx = vq->last_used_idx & mask;
> +		desc = &descs[idx];
> +
> +		if (!desc_is_avail(vq, desc))

IIUC, this means the ring is full. I think this is an unlikely case,
so it may be better to use the unlikely() macro here.
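Something like:

		if (unlikely(!desc_is_avail(vq, desc)))
			break;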
> +			break;
> +		rte_smp_rmb();
> +
> +		desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
> +				sizeof(*desc), VHOST_ACCESS_RW);
> +		/*
> +		 * Checking of 'desc_addr' placed outside of 'unlikely' macro
> +		 * to avoid performance issue with some versions of gcc (4.8.4
> +		 * and 5.3.0) which otherwise stores offset on the stack instead
> +		 * of in a register.
> +		 */
> +		if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr)
> +			break;
> +
> +		hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;
> +		virtio_enqueue_offload(m, &hdr->hdr);
> +		vhost_log_write(dev, desc->addr, dev->vhost_hlen);
> +		PRINT_PACKET(dev, (uintptr_t)desc_addr, dev->vhost_hlen, 0);
> +
> +		desc_offset = dev->vhost_hlen;
> +		desc_avail = desc->len - dev->vhost_hlen;
> +
> +		mbuf_avail = rte_pktmbuf_data_len(m);
> +		mbuf_offset = 0;
> +		while (mbuf_avail != 0 || m->next != NULL) {
> +			/* done with current mbuf, fetch next */
> +			if (mbuf_avail == 0) {
> +				m = m->next;
> +
> +				mbuf_offset = 0;
> +				mbuf_avail = rte_pktmbuf_data_len(m);
> +			}
> +
> +			/* done with current desc buf, fetch next */
> +			if (desc_avail == 0) {
> +				if ((desc->flags & VRING_DESC_F_NEXT) == 0) {
> +					/* Room in vring buffer is not enough */
> +					goto out;
> +				}
> +
> +				idx = (idx + 1) & (vq->size - 1);
> +				desc = &descs[idx];
> +				if (unlikely(!desc_is_avail(vq, desc)))
> +					goto out;
> +
> +				desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
> +						sizeof(*desc),
> +						VHOST_ACCESS_RW);
> +				if (unlikely(!desc_addr))
> +					goto out;
> +
> +				desc_offset = 0;
> +				desc_avail = desc->len;
> +			}
> +
> +			cpy_len = RTE_MIN(desc_avail, mbuf_avail);
> +			rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)),
> +				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
> +				cpy_len);
> +			vhost_log_write(dev, desc->addr + desc_offset, cpy_len);
> +			PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
> +				cpy_len, 0);
> +
> +			mbuf_avail -= cpy_len;
> +			mbuf_offset += cpy_len;
> +			desc_avail -= cpy_len;
> +			desc_offset += cpy_len;
> +		}
> +
> +		descs[idx].len = pkts[i]->pkt_len + dev->vhost_hlen;
> +		rte_smp_wmb();
> +		set_desc_used(vq, desc);
> +
> +		vq->last_used_idx++;
> +		if ((vq->last_used_idx & (vq->size - 1)) == 0)
> +			toggle_wrap_counter(vq);
> +	}
> +
> +out:
> +	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> +		vhost_user_iotlb_rd_unlock(vq);
> +
> +out_access_unlock:
> +	rte_spinlock_unlock(&vq->access_lock);
> +
> +	count = i;
> +
> +	return count;
> +}
> +
>  uint16_t
>  rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
>  	struct rte_mbuf **pkts, uint16_t count)
>
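Not a comment on the patch itself, but for completeness: the short
count returned when desc_is_avail() fails is exactly what callers of
the public API see, so the packed layout stays transparent to
applications. A minimal caller-side sketch (my example, not from this
series; queue 0 is the guest RX queue by convention, and 'stats' is a
hypothetical counter struct):

	uint16_t enqueued, i;

	/* copy a burst of mbufs into the guest's receive ring; a
	 * short count means the ring ran out of available descriptors */
	enqueued = rte_vhost_enqueue_burst(vid, 0, pkts, nb_pkts);
	if (enqueued < nb_pkts)
		stats->rx_dropped += nb_pkts - enqueued;

	/* the API copies into guest memory, so the caller keeps
	 * ownership and frees all mbufs, enqueued or not */
	for (i = 0; i < nb_pkts; i++)
		rte_pktmbuf_free(pkts[i]);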