From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin
To: dev@dpdk.org, chenbo.xia@intel.com, jiayu.hu@intel.com,
	yuanx.wang@intel.com, wenwux.ma@intel.com, bruce.richardson@intel.com,
	john.mcnamara@intel.com,
	david.marchand@redhat.com
Cc: Maxime Coquelin
Date: Mon, 18 Oct 2021 15:02:18 +0200
Message-Id: <20211018130229.308694-4-maxime.coquelin@redhat.com>
In-Reply-To: <20211018130229.308694-1-maxime.coquelin@redhat.com>
References: <20211018130229.308694-1-maxime.coquelin@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"
Subject: [dpdk-dev] [PATCH v1 03/14] vhost: simplify async IO vectors
List-Id: DPDK patches and discussions

The IO vectors implementation is unnecessarily complex, as it mixes
source and destination vectors in the same array.

This patch declares two arrays instead, one for the sources and one
for the destinations. It also gets rid of the segs_await variable in
both the packed and split implementations, since it always held the
same value as iovec_idx.

Signed-off-by: Maxime Coquelin
---
 lib/vhost/vhost.h      |  5 +++--
 lib/vhost/virtio_net.c | 28 +++++++++++-----------------
 2 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 9de87d20cc..f2d9535174 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -49,7 +49,7 @@
 #define MAX_PKT_BURST 32
 
 #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
-#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 4)
+#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
 
 #define PACKED_DESC_ENQUEUE_USED_FLAG(w)	\
 	((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
@@ -133,7 +133,8 @@ struct vhost_async {
 	struct rte_vhost_async_channel_ops ops;
 
 	struct rte_vhost_iov_iter it_pool[VHOST_MAX_ASYNC_IT];
-	struct iovec vec_pool[VHOST_MAX_ASYNC_VEC];
+	struct iovec src_iovec[VHOST_MAX_ASYNC_VEC];
+	struct iovec dst_iovec[VHOST_MAX_ASYNC_VEC];
 
 	/* data transfer status */
 	struct async_inflight_info *pkts_info;
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index a109c2a316..4e0e1584b8 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1512,14 +1512,12 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 	struct vhost_async *async = vq->async;
 	struct rte_vhost_iov_iter *it_pool = async->it_pool;
-	struct iovec *vec_pool = async->vec_pool;
 	struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
-	struct iovec *src_iovec = vec_pool;
-	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
+	struct iovec *src_iovec = async->src_iovec;
+	struct iovec *dst_iovec = async->dst_iovec;
 	struct async_inflight_info *pkts_info = async->pkts_info;
 	uint32_t n_pkts = 0, pkt_err = 0;
 	int32_t n_xfer;
-	uint16_t segs_await = 0;
 	uint16_t iovec_idx = 0, it_idx = 0, slot_idx = 0;
 
 	/*
@@ -1562,7 +1560,6 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 		pkts_info[slot_idx].mbuf = pkts[pkt_idx];
 
 		iovec_idx += it_pool[it_idx].nr_segs;
-		segs_await += it_pool[it_idx].nr_segs;
 		it_idx += 2;
 
 		vq->last_avail_idx += num_buffers;
@@ -1573,8 +1570,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 		 * - unused async iov number is less than max vhost vector
 		 */
 		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
-			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await <
-			BUF_VECTOR_MAX))) {
+			(VHOST_MAX_ASYNC_VEC - iovec_idx < BUF_VECTOR_MAX))) {
 			n_xfer = async->ops.transfer_data(dev->vid,
 					queue_id, tdes, 0, pkt_burst_idx);
 			if (likely(n_xfer >= 0)) {
@@ -1588,7 +1584,6 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 
 			iovec_idx = 0;
 			it_idx = 0;
-			segs_await = 0;
 
 			if (unlikely(n_pkts < pkt_burst_idx)) {
 				/*
@@ -1745,8 +1740,11 @@ vhost_enqueue_async_packed(struct virtio_net *dev,
 		if (unlikely(++tries > max_tries))
 			return -1;
 
-		if (unlikely(fill_vec_buf_packed(dev, vq, avail_idx, &desc_count, buf_vec, &nr_vec,
-						&buf_id, &len, VHOST_ACCESS_RW) < 0))
+		if (unlikely(fill_vec_buf_packed(dev, vq,
+						avail_idx, &desc_count,
+						buf_vec, &nr_vec,
+						&buf_id, &len,
+						VHOST_ACCESS_RW) < 0))
 			return -1;
 
 		len = RTE_MIN(len, size);
@@ -1832,14 +1830,12 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 	struct vhost_async *async = vq->async;
 	struct rte_vhost_iov_iter *it_pool = async->it_pool;
-	struct iovec *vec_pool = async->vec_pool;
 	struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
-	struct iovec *src_iovec = vec_pool;
-	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
+	struct iovec *src_iovec = async->src_iovec;
+	struct iovec *dst_iovec = async->dst_iovec;
 	struct async_inflight_info *pkts_info = async->pkts_info;
 	uint32_t n_pkts = 0, pkt_err = 0;
 	uint16_t slot_idx = 0;
-	uint16_t segs_await = 0;
 	uint16_t iovec_idx = 0, it_idx = 0;
 
 	do {
@@ -1861,7 +1857,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 		pkts_info[slot_idx].nr_buffers = num_buffers;
 		pkts_info[slot_idx].mbuf = pkts[pkt_idx];
 		iovec_idx += it_pool[it_idx].nr_segs;
-		segs_await += it_pool[it_idx].nr_segs;
 		it_idx += 2;
 
 		pkt_idx++;
@@ -1874,7 +1869,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 		 * - unused async iov number is less than max vhost vector
 		 */
 		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
-			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await < BUF_VECTOR_MAX))) {
+			(VHOST_MAX_ASYNC_VEC - iovec_idx < BUF_VECTOR_MAX))) {
 			n_xfer = async->ops.transfer_data(dev->vid,
 					queue_id, tdes, 0, pkt_burst_idx);
 			if (likely(n_xfer >= 0)) {
@@ -1888,7 +1883,6 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 
 			iovec_idx = 0;
 			it_idx = 0;
-			segs_await = 0;
 
 			if (unlikely(n_pkts < pkt_burst_idx)) {
 				/*
-- 
2.31.1