From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
 sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v7 3/5] vhost: merge sync and async descriptor to mbuf filling
Date: Mon, 16 May 2022 02:43:23 +0000
Message-Id: <20220516024325.96067-4-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220516024325.96067-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
 <20220516024325.96067-1-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

From: Xuan Ding <xuan.ding@intel.com>

This patch refactors copy_desc_to_mbuf() used by the sync
path to support both sync and async descriptor to mbuf filling.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.h      |  1 +
 lib/vhost/virtio_net.c | 48 ++++++++++++++++++++++++++++++++----------
 2 files changed, 38 insertions(+), 11 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index a9edc271aa..00744b234f 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -180,6 +180,7 @@ struct async_inflight_info {
 	struct rte_mbuf *mbuf;
 	uint16_t descs; /* num of descs inflight */
 	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
+	struct virtio_net_hdr nethdr;
 };
 
 struct vhost_async {
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index a9e2dcd9ce..5904839d5c 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2487,10 +2487,10 @@ copy_vnet_hdr_from_desc(struct virtio_net_hdr *hdr,
 }
 
 static __rte_always_inline int
-copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
+desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		  struct buf_vector *buf_vec, uint16_t nr_vec,
 		  struct rte_mbuf *m, struct rte_mempool *mbuf_pool,
-		  bool legacy_ol_flags)
+		  bool legacy_ol_flags, uint16_t slot_idx, bool is_async)
 {
 	uint32_t buf_avail, buf_offset, buf_len;
 	uint64_t buf_addr, buf_iova;
@@ -2501,6 +2501,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain */
 	uint16_t vec_idx = 0;
+	struct vhost_async *async = vq->async;
+	struct async_inflight_info *pkts_info;
 
 	buf_addr = buf_vec[vec_idx].buf_addr;
 	buf_iova = buf_vec[vec_idx].buf_iova;
@@ -2538,6 +2540,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(++vec_idx >= nr_vec))
 			goto error;
 		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
 		buf_len = buf_vec[vec_idx].buf_len;
 
 		buf_offset = 0;
@@ -2553,12 +2556,25 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	mbuf_offset = 0;
 	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
+
+	if (is_async) {
+		pkts_info = async->pkts_info;
+		if (async_iter_initialize(dev, async))
+			return -1;
+	}
+
 	while (1) {
 		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
-		sync_fill_seg(dev, vq, cur, mbuf_offset,
-			      buf_addr + buf_offset,
-			      buf_iova + buf_offset, cpy_len, false);
+		if (is_async) {
+			if (async_fill_seg(dev, vq, cur, mbuf_offset,
+					   buf_iova + buf_offset, cpy_len, false) < 0)
+				goto error;
+		} else {
+			sync_fill_seg(dev, vq, cur, mbuf_offset,
+				      buf_addr + buf_offset,
+				      buf_iova + buf_offset, cpy_len, false);
+		}
 
 		mbuf_avail  -= cpy_len;
 		mbuf_offset += cpy_len;
@@ -2607,11 +2623,20 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	prev->data_len = mbuf_offset;
 	m->pkt_len    += mbuf_offset;
 
-	if (hdr)
-		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	if (is_async) {
+		async_iter_finalize(async);
+		if (hdr)
+			pkts_info[slot_idx].nethdr = *hdr;
+	} else {
+		if (hdr)
+			vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	}
 
 	return 0;
 error:
+	if (is_async)
+		async_iter_cancel(async);
+
 	return -1;
 }
 
@@ -2743,8 +2768,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
-		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
-				mbuf_pool, legacy_ol_flags);
+		err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
+				mbuf_pool, legacy_ol_flags, 0, false);
 		if (unlikely(err)) {
 			if (!allocerr_warned) {
 				VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",
@@ -2755,6 +2780,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			i++;
 			break;
 		}
+
 	}
 
 	if (dropped)
@@ -2936,8 +2962,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 		return -1;
 	}
 
-	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
-				mbuf_pool, legacy_ol_flags);
+	err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
+			mbuf_pool, legacy_ol_flags, 0, false);
 	if (unlikely(err)) {
 		if (!allocerr_warned) {
 			VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",
-- 
2.17.1