From mboxrd@z Thu Jan  1 00:00:00 1970
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
	sunil.pai.g@intel.com, liangma@liangbit.com,
	Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v8 1/5] vhost: prepare sync for descriptor to mbuf refactoring
Date: Mon, 16 May 2022 11:10:37 +0000
Message-Id: <20220516111041.63914-2-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220516111041.63914-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
	<20220516111041.63914-1-xuan.ding@intel.com>

From: Xuan Ding <xuan.ding@intel.com>

This patch extracts the descriptor-to-buffer filling from
copy_desc_to_mbuf() into a dedicated function. In addition, the enqueue
and dequeue paths are refactored to use the same function,
sync_fill_seg(), for preparing batch elements, which simplifies the
code without degrading performance.
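For illustration, the direction-flag pattern introduced here boils down to the
following minimal standalone sketch (plain memcpy and generic pointers instead
of the real vhost_virtqueue/rte_mbuf and batch-copy handling; fill_seg is a
hypothetical name used only for this sketch, not part of the patch):

	#include <stdbool.h>
	#include <stddef.h>
	#include <string.h>

	/* One helper covers both copy directions, selected by to_desc,
	 * instead of two near-duplicate copy functions. */
	static inline void
	fill_seg(void *desc_buf, void *mbuf_buf, size_t cpy_len, bool to_desc)
	{
		if (to_desc)
			memcpy(desc_buf, mbuf_buf, cpy_len); /* enqueue: mbuf -> descriptor */
		else
			memcpy(mbuf_buf, desc_buf, cpy_len); /* dequeue: descriptor -> mbuf */
	}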
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/virtio_net.c | 78 ++++++++++++++++++++----------------------
 1 file changed, 38 insertions(+), 40 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 5f432b0d77..d4c94d2a9b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1030,23 +1030,36 @@ async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
+sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		struct rte_mbuf *m, uint32_t mbuf_offset,
-		uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len)
+		uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len, bool to_desc)
 {
 	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
 
 	if (likely(cpy_len > MAX_BATCH_LEN || vq->batch_copy_nb_elems >= vq->size)) {
-		rte_memcpy((void *)((uintptr_t)(buf_addr)),
+		if (to_desc) {
+			rte_memcpy((void *)((uintptr_t)(buf_addr)),
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
 				cpy_len);
+		} else {
+			rte_memcpy(rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
+				(void *)((uintptr_t)(buf_addr)),
+				cpy_len);
+		}
 		vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
 		PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
 	} else {
-		batch_copy[vq->batch_copy_nb_elems].dst =
-			(void *)((uintptr_t)(buf_addr));
-		batch_copy[vq->batch_copy_nb_elems].src =
-			rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+		if (to_desc) {
+			batch_copy[vq->batch_copy_nb_elems].dst =
+				(void *)((uintptr_t)(buf_addr));
+			batch_copy[vq->batch_copy_nb_elems].src =
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+		} else {
+			batch_copy[vq->batch_copy_nb_elems].dst =
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+			batch_copy[vq->batch_copy_nb_elems].src =
+				(void *)((uintptr_t)(buf_addr));
+		}
 		batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
 		batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
 		vq->batch_copy_nb_elems++;
@@ -1158,9 +1171,9 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 					buf_iova + buf_offset, cpy_len) < 0)
 				goto error;
 		} else {
-			sync_mbuf_to_desc_seg(dev, vq, m, mbuf_offset,
-					buf_addr + buf_offset,
-					buf_iova + buf_offset, cpy_len);
+			sync_fill_seg(dev, vq, m, mbuf_offset,
+					buf_addr + buf_offset,
+					buf_iova + buf_offset, cpy_len, true);
 		}
 
 		mbuf_avail -= cpy_len;
@@ -2473,8 +2486,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		  struct rte_mbuf *m, struct rte_mempool *mbuf_pool,
 		  bool legacy_ol_flags)
 {
-	uint32_t buf_avail, buf_offset;
-	uint64_t buf_addr, buf_len;
+	uint32_t buf_avail, buf_offset, buf_len;
+	uint64_t buf_addr, buf_iova;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
 	struct rte_mbuf *cur = m, *prev = m;
@@ -2482,16 +2495,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain */
 	uint16_t vec_idx = 0;
-	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
-	int error = 0;
 
 	buf_addr = buf_vec[vec_idx].buf_addr;
+	buf_iova = buf_vec[vec_idx].buf_iova;
 	buf_len = buf_vec[vec_idx].buf_len;
 
-	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) {
-		error = -1;
-		goto out;
-	}
+	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1))
+		return -1;
 
 	if (virtio_net_with_host_offload(dev)) {
 		if (unlikely(buf_len < sizeof(struct virtio_net_hdr))) {
@@ -2515,11 +2525,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		buf_offset = dev->vhost_hlen - buf_len;
 		vec_idx++;
 		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
 		buf_len = buf_vec[vec_idx].buf_len;
 		buf_avail = buf_len - buf_offset;
 	} else if (buf_len == dev->vhost_hlen) {
 		if (unlikely(++vec_idx >= nr_vec))
-			goto out;
+			goto error;
 		buf_addr = buf_vec[vec_idx].buf_addr;
 		buf_len = buf_vec[vec_idx].buf_len;
 
@@ -2539,22 +2550,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	while (1) {
 		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
-		if (likely(cpy_len > MAX_BATCH_LEN ||
-					vq->batch_copy_nb_elems >= vq->size ||
-					(hdr && cur == m))) {
-			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
-						mbuf_offset),
-					(void *)((uintptr_t)(buf_addr +
-							buf_offset)), cpy_len);
-		} else {
-			batch_copy[vq->batch_copy_nb_elems].dst =
-				rte_pktmbuf_mtod_offset(cur, void *,
-						mbuf_offset);
-			batch_copy[vq->batch_copy_nb_elems].src =
-				(void *)((uintptr_t)(buf_addr + buf_offset));
-			batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
-			vq->batch_copy_nb_elems++;
-		}
+		sync_fill_seg(dev, vq, cur, mbuf_offset,
+			buf_addr + buf_offset,
+			buf_iova + buf_offset, cpy_len, false);
 
 		mbuf_avail -= cpy_len;
 		mbuf_offset += cpy_len;
@@ -2567,6 +2565,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				break;
 
 			buf_addr = buf_vec[vec_idx].buf_addr;
+			buf_iova = buf_vec[vec_idx].buf_iova;
 			buf_len = buf_vec[vec_idx].buf_len;
 
 			buf_offset = 0;
@@ -2585,8 +2584,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			if (unlikely(cur == NULL)) {
 				VHOST_LOG_DATA(ERR, "(%s) failed to allocate memory for mbuf.\n",
 						dev->ifname);
-				error = -1;
-				goto out;
+				goto error;
 			}
 
 			prev->next = cur;
@@ -2606,9 +2604,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	if (hdr)
 		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
 
-out:
-
-	return error;
+	return 0;
+error:
+	return -1;
 }
 
 static void
-- 
2.17.1