From mboxrd@z Thu Jan 1 00:00:00 1970
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
	sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding
Subject: [PATCH v3 3/5] vhost: merge sync and async descriptor to mbuf filling
Date: Tue, 19 Apr 2022 03:43:21 +0000
Message-Id: <20220419034323.92820-4-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220419034323.92820-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
 <20220419034323.92820-1-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

From: Xuan Ding

This patch refactors copy_desc_to_mbuf() used by the sync path to
support both sync and async descriptor to mbuf filling.

Signed-off-by: Xuan Ding
---
 lib/vhost/vhost.h      |  1 +
 lib/vhost/virtio_net.c | 48 ++++++++++++++++++++++++++++++++----------
 2 files changed, 38 insertions(+), 11 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index a9edc271aa..9209558465 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -177,6 +177,7 @@ extern struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
  * inflight async packet information
  */
 struct async_inflight_info {
+	struct virtio_net_hdr nethdr;
 	struct rte_mbuf *mbuf;
 	uint16_t descs; /* num of descs inflight */
 	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 391fb82f0e..6f5bd21946 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2488,10 +2488,10 @@ copy_vnet_hdr_from_desc(struct virtio_net_hdr *hdr,
 }
 
 static __rte_always_inline int
-copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
+desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		  struct buf_vector *buf_vec, uint16_t nr_vec,
 		  struct rte_mbuf *m, struct rte_mempool *mbuf_pool,
-		  bool legacy_ol_flags)
+		  bool legacy_ol_flags, uint16_t slot_idx, bool is_async)
 {
 	uint32_t buf_avail, buf_offset, buf_len;
 	uint64_t buf_addr, buf_iova;
@@ -2502,6 +2502,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain
 	 */
 	uint16_t vec_idx = 0;
+	struct vhost_async *async = vq->async;
+	struct async_inflight_info *pkts_info;
 
 	buf_addr = buf_vec[vec_idx].buf_addr;
 	buf_iova = buf_vec[vec_idx].buf_iova;
@@ -2539,6 +2541,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(++vec_idx >= nr_vec))
 			goto error;
 		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
 		buf_len = buf_vec[vec_idx].buf_len;
 
 		buf_offset = 0;
@@ -2554,12 +2557,25 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	mbuf_offset = 0;
 	mbuf_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
+
+	if (is_async) {
+		pkts_info = async->pkts_info;
+		if (async_iter_initialize(dev, async))
+			return -1;
+	}
+
 	while (1) {
 		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
-		sync_fill_seg(dev, vq, cur, mbuf_offset,
-			      buf_addr + buf_offset,
-			      buf_iova + buf_offset, cpy_len, false);
+		if (is_async) {
+			if (async_fill_seg(dev, vq, cur, mbuf_offset,
+					   buf_iova + buf_offset, cpy_len, false) < 0)
+				goto error;
+		} else {
+			sync_fill_seg(dev, vq, cur, mbuf_offset,
+				      buf_addr + buf_offset,
+				      buf_iova + buf_offset, cpy_len, false);
+		}
 
 		mbuf_avail -= cpy_len;
 		mbuf_offset += cpy_len;
@@ -2608,11 +2624,20 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	prev->data_len = mbuf_offset;
 	m->pkt_len += mbuf_offset;
 
-	if (hdr)
-		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	if (is_async) {
+		async_iter_finalize(async);
+		if (hdr)
+			pkts_info[slot_idx].nethdr = *hdr;
+	} else {
+		if (hdr)
+			vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	}
 
 	return 0;
 error:
+	if (is_async)
+		async_iter_cancel(async);
+
 	return -1;
 }
 
@@ -2744,8 +2769,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
-		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
-				mbuf_pool, legacy_ol_flags);
+		err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
+				mbuf_pool, legacy_ol_flags, 0, false);
 		if (unlikely(err)) {
 			if (!allocerr_warned) {
 				VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",
@@ -2756,6 +2781,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			i++;
 			break;
 		}
+	}
 
 	if (dropped)
@@ -2937,8 +2963,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 		return -1;
 	}
 
-	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
-				mbuf_pool, legacy_ol_flags);
+	err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
+				mbuf_pool, legacy_ol_flags, 0, false);
 	if (unlikely(err)) {
 		if (!allocerr_warned) {
 			VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",
-- 
2.17.1