From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xuan Ding <xuan.ding@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
 sunil.pai.g@intel.com, liangma@liangbit.com,
 Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v2 1/5] vhost: prepare sync for descriptor to mbuf refactoring
Date: Mon, 11 Apr 2022 10:00:28 +0000
Message-Id: <20220411100032.114434-2-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220411100032.114434-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
 <20220411100032.114434-1-xuan.ding@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

From: Xuan Ding <xuan.ding@intel.com>

This patch extracts the descriptor-to-buffer filling from
copy_desc_to_mbuf() into a dedicated function. In addition, the enqueue
and dequeue paths are refactored to use the same function,
sync_fill_seg(), to prepare batch copy elements, which simplifies the
code without performance degradation.
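For reviewers, the direction-flag pattern introduced here can be
summarized with a minimal, self-contained C sketch. The names below
(copy_elem, fill_seg, do_copies) are simplified stand-ins for this
illustration only, not the actual vhost structures or the
batch-size/flush policy in virtio_net.c:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for struct batch_copy_elem. */
struct copy_elem {
	void *dst;
	const void *src;
	size_t len;
};

/*
 * Queue one segment copy. A single helper serves both paths:
 * 'to_desc' selects whether the descriptor buffer is the destination
 * (enqueue, mbuf -> desc) or the source (dequeue, desc -> mbuf).
 */
static void
fill_seg(struct copy_elem *elem, void *desc_buf, void *mbuf_buf,
		size_t len, bool to_desc)
{
	if (to_desc) {
		elem->dst = desc_buf;
		elem->src = mbuf_buf;
	} else {
		elem->dst = mbuf_buf;
		elem->src = desc_buf;
	}
	elem->len = len;
}

/* Flush all queued copies in one pass, as do_data_copy_*() does. */
static void
do_copies(struct copy_elem *elems, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		memcpy(elems[i].dst, elems[i].src, elems[i].len);
}

int
main(void)
{
	char desc[9] = "guestbuf";	/* stand-in descriptor buffer */
	char mbuf[9] = "";		/* stand-in mbuf data area */
	struct copy_elem batch[4];
	size_t n = 0;

	/* Dequeue direction: descriptor -> mbuf, so to_desc == false. */
	fill_seg(&batch[n++], desc, mbuf, 8, false);
	do_copies(batch, n);

	printf("%s\n", mbuf);		/* prints "guestbuf" */
	return 0;
}

The point mirrored in the patch is that both directions share the
batching bookkeeping and only the src/dst assignment differs, so
copy_desc_to_mbuf() no longer needs its own open-coded batch_copy
handling.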
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 lib/vhost/virtio_net.c | 66 +++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 37 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 5f432b0d77..a2d04a1f60 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1030,9 +1030,9 @@ async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
+sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		struct rte_mbuf *m, uint32_t mbuf_offset,
-		uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len)
+		uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len, bool to_desc)
 {
 	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
 
@@ -1043,10 +1043,17 @@ sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
 		PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
 	} else {
-		batch_copy[vq->batch_copy_nb_elems].dst =
-			(void *)((uintptr_t)(buf_addr));
-		batch_copy[vq->batch_copy_nb_elems].src =
-			rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+		if (to_desc) {
+			batch_copy[vq->batch_copy_nb_elems].dst =
+				(void *)((uintptr_t)(buf_addr));
+			batch_copy[vq->batch_copy_nb_elems].src =
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+		} else {
+			batch_copy[vq->batch_copy_nb_elems].dst =
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+			batch_copy[vq->batch_copy_nb_elems].src =
+				(void *)((uintptr_t)(buf_addr));
+		}
 		batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
 		batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
 		vq->batch_copy_nb_elems++;
@@ -1158,9 +1165,9 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 					buf_iova + buf_offset, cpy_len) < 0)
 				goto error;
 		} else {
-			sync_mbuf_to_desc_seg(dev, vq, m, mbuf_offset,
+			sync_fill_seg(dev, vq, m, mbuf_offset,
 					buf_addr + buf_offset,
-					buf_iova + buf_offset, cpy_len);
+					buf_iova + buf_offset, cpy_len, true);
 		}
 
 		mbuf_avail  -= cpy_len;
@@ -2474,7 +2481,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		  bool legacy_ol_flags)
 {
 	uint32_t buf_avail, buf_offset;
-	uint64_t buf_addr, buf_len;
+	uint64_t buf_addr, buf_iova, buf_len;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
 	struct rte_mbuf *cur = m, *prev = m;
@@ -2482,16 +2489,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain */
 	uint16_t vec_idx = 0;
-	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
-	int error = 0;
 
 	buf_addr = buf_vec[vec_idx].buf_addr;
+	buf_iova = buf_vec[vec_idx].buf_iova;
 	buf_len = buf_vec[vec_idx].buf_len;
 
-	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) {
-		error = -1;
-		goto out;
-	}
+	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1))
+		return -1;
 
 	if (virtio_net_with_host_offload(dev)) {
 		if (unlikely(buf_len < sizeof(struct virtio_net_hdr))) {
@@ -2515,11 +2519,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		buf_offset = dev->vhost_hlen - buf_len;
 		vec_idx++;
 		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
 		buf_len = buf_vec[vec_idx].buf_len;
 		buf_avail = buf_len - buf_offset;
 	} else if (buf_len == dev->vhost_hlen) {
 		if (unlikely(++vec_idx >= nr_vec))
-			goto out;
+			goto error;
 		buf_addr = buf_vec[vec_idx].buf_addr;
 		buf_len = buf_vec[vec_idx].buf_len;
@@ -2539,22 +2544,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	while (1) {
 		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
-		if (likely(cpy_len > MAX_BATCH_LEN ||
-					vq->batch_copy_nb_elems >= vq->size ||
-					(hdr && cur == m))) {
-			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
-						mbuf_offset),
-					(void *)((uintptr_t)(buf_addr +
-							buf_offset)), cpy_len);
-		} else {
-			batch_copy[vq->batch_copy_nb_elems].dst =
-				rte_pktmbuf_mtod_offset(cur, void *,
-						mbuf_offset);
-			batch_copy[vq->batch_copy_nb_elems].src =
-				(void *)((uintptr_t)(buf_addr + buf_offset));
-			batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
-			vq->batch_copy_nb_elems++;
-		}
+		sync_fill_seg(dev, vq, cur, mbuf_offset,
+				buf_addr + buf_offset,
+				buf_iova + buf_offset, cpy_len, false);
 
 		mbuf_avail  -= cpy_len;
 		mbuf_offset += cpy_len;
@@ -2567,6 +2559,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 
 		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
 		buf_len = buf_vec[vec_idx].buf_len;
 
 		buf_offset = 0;
@@ -2585,8 +2578,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			if (unlikely(cur == NULL)) {
 				VHOST_LOG_DATA(ERR, "(%s) failed to allocate memory for mbuf.\n",
 						dev->ifname);
-				error = -1;
-				goto out;
+				goto error;
 			}
 
 			prev->next = cur;
@@ -2606,9 +2598,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	if (hdr)
 		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
 
-out:
-
-	return error;
+	return 0;
+error:
+	return -1;
 }
 
 static void
-- 
2.17.1