From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
 sunil.pai.g@intel.com, liangma@liangbit.com,
 Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v8 3/5] vhost: merge sync and async descriptor to mbuf filling
Date: Mon, 16 May 2022 11:10:39 +0000
Message-Id: <20220516111041.63914-4-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220516111041.63914-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
 <20220516111041.63914-1-xuan.ding@intel.com>

From: Xuan Ding <xuan.ding@intel.com>

This patch refactors copy_desc_to_mbuf(), so far used only by the sync
path, into a common desc_to_mbuf() that supports both sync and async
descriptor-to-mbuf filling.
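Note: on the async path the virtio net header cannot be turned into
mbuf offload metadata at descriptor-processing time, since the DMA copy
of the payload has not completed yet. desc_to_mbuf() therefore saves
the header into pkts_info[slot_idx].nethdr so a completion path can
apply it later. A rough sketch of how an async caller is expected to
use the new parameters; the loop context, the slot_idx computation and
the completion-side call below are illustrative assumptions, not part
of this patch:

	/* Hypothetical async dequeue submission: record which in-flight
	 * slot the packet occupies so its saved header can be found at
	 * completion time. */
	slot_idx = (async->pkts_idx + i) & (vq->size - 1);
	err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
			   mbuf_pool, legacy_ol_flags, slot_idx, true);

	/* Hypothetical completion side: only now, once the payload has
	 * landed in the mbuf, is the saved header converted into mbuf
	 * offload flags. */
	vhost_dequeue_offload(dev, &pkts_info[slot_idx].nethdr,
			      pkts_info[slot_idx].mbuf, legacy_ol_flags);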
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.h      |  1 +
 lib/vhost/virtio_net.c | 48 ++++++++++++++++++++++++++++++++----------
 2 files changed, 38 insertions(+), 11 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index a9edc271aa..00744b234f 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -180,6 +180,7 @@ struct async_inflight_info {
 	struct rte_mbuf *mbuf;
 	uint16_t descs; /* num of descs inflight */
 	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
+	struct virtio_net_hdr nethdr;
 };
 
 struct vhost_async {
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index a9e2dcd9ce..5904839d5c 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2487,10 +2487,10 @@ copy_vnet_hdr_from_desc(struct virtio_net_hdr *hdr,
 }
 
 static __rte_always_inline int
-copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
+desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		  struct buf_vector *buf_vec, uint16_t nr_vec,
 		  struct rte_mbuf *m, struct rte_mempool *mbuf_pool,
-		  bool legacy_ol_flags)
+		  bool legacy_ol_flags, uint16_t slot_idx, bool is_async)
 {
 	uint32_t buf_avail, buf_offset, buf_len;
 	uint64_t buf_addr, buf_iova;
@@ -2501,6 +2501,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain */
 	uint16_t vec_idx = 0;
+	struct vhost_async *async = vq->async;
+	struct async_inflight_info *pkts_info;
 
 	buf_addr = buf_vec[vec_idx].buf_addr;
 	buf_iova = buf_vec[vec_idx].buf_iova;
@@ -2538,6 +2540,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(++vec_idx >= nr_vec))
 			goto error;
 		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
 		buf_len = buf_vec[vec_idx].buf_len;
 		buf_offset = 0;
@@ -2553,12 +2556,25 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	mbuf_offset = 0;
 	mbuf_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
+
+	if (is_async) {
+		pkts_info = async->pkts_info;
+		if (async_iter_initialize(dev, async))
+			return -1;
+	}
+
 	while (1) {
 		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
-		sync_fill_seg(dev, vq, cur, mbuf_offset,
-			      buf_addr + buf_offset,
-			      buf_iova + buf_offset, cpy_len, false);
+		if (is_async) {
+			if (async_fill_seg(dev, vq, cur, mbuf_offset,
+					   buf_iova + buf_offset, cpy_len, false) < 0)
+				goto error;
+		} else {
+			sync_fill_seg(dev, vq, cur, mbuf_offset,
+				      buf_addr + buf_offset,
+				      buf_iova + buf_offset, cpy_len, false);
+		}
 
 		mbuf_avail -= cpy_len;
 		mbuf_offset += cpy_len;
@@ -2607,11 +2623,20 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	prev->data_len = mbuf_offset;
 	m->pkt_len += mbuf_offset;
 
-	if (hdr)
-		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	if (is_async) {
+		async_iter_finalize(async);
+		if (hdr)
+			pkts_info[slot_idx].nethdr = *hdr;
+	} else {
+		if (hdr)
+			vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	}
 
 	return 0;
 error:
+	if (is_async)
+		async_iter_cancel(async);
+
 	return -1;
 }
 
@@ -2743,8 +2768,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
-		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
-				mbuf_pool, legacy_ol_flags);
+		err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
+				mbuf_pool, legacy_ol_flags, 0, false);
 		if (unlikely(err)) {
 			if (!allocerr_warned) {
 				VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",
@@ -2755,6 +2780,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			i++;
 			break;
 		}
+
 	}
 
 	if (dropped)
@@ -2936,8 +2962,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 		return -1;
 	}
 
-	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
-			mbuf_pool, legacy_ol_flags);
+	err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
+			mbuf_pool, legacy_ol_flags, 0, false);
 	if (unlikely(err)) {
 		if (!allocerr_warned) {
 			VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",
-- 
2.17.1
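
P.S. As a self-contained illustration of the merge pattern the patch
applies (a toy model with invented names and types, not DPDK code): a
single fill routine takes an is_async flag; the sync branch copies
immediately, while the async branch only records per-slot work to be
finished later, mirroring the shape of desc_to_mbuf() above.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* Toy stand-in for async_inflight_info: per-slot record of work
	 * that a DMA engine would finish later in the real code. */
	struct slot {
		char hdr[16];
		bool pending;
	};
	static struct slot pkts_info[8];

	/* One routine for both paths, selected by is_async. */
	static void fill_seg(char *dst, const char *src, size_t len,
			     uint16_t slot_idx, bool is_async)
	{
		if (is_async) {
			/* Async: save the header, defer the payload copy. */
			memcpy(pkts_info[slot_idx].hdr, src,
			       len < sizeof(pkts_info[slot_idx].hdr) ?
			       len : sizeof(pkts_info[slot_idx].hdr));
			pkts_info[slot_idx].pending = true;
		} else {
			/* Sync: copy in place; nothing stays in flight. */
			memcpy(dst, src, len);
		}
	}

	int main(void)
	{
		char dst[16] = {0};

		fill_seg(dst, "sync-data", 10, 0, false);
		fill_seg(dst, "async-data", 11, 1, true);
		printf("sync result: %s, async slot pending: %d\n",
		       dst, pkts_info[1].pending);
		return 0;
	}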