From: Maxime Coquelin
To: dev@dpdk.org, david.marchand@redhat.com, chenbox@nvidia.com
Cc: Maxime Coquelin
Subject: [PATCH v2 3/4] vhost: rework async dequeue path error handling
Date: Wed, 15 Jan 2025 13:59:37 +0100
Message-ID: <20250115125938.2699577-4-maxime.coquelin@redhat.com>
In-Reply-To: <20250115125938.2699577-1-maxime.coquelin@redhat.com>
References: <20250115125938.2699577-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

This patch refactors the error handling in the Vhost async dequeue path
to ease its maintenance and improve readability.
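
For context, here is a minimal, self-contained sketch of the single-exit
error-handling layout the function converges on after this change. All names
(struct ctx, lock_queue, do_dequeue, ...) are hypothetical placeholders, not
the DPDK code itself; the point is that the return counter starts at zero and
each error path jumps to the label matching the cleanup it needs.

/*
 * Sketch only (hypothetical names): the return counter starts at zero and
 * failures jump to the label matching the amount of cleanup required,
 * instead of resetting the count at every exit point.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct ctx { bool ready; };                     /* stand-in for dev/vq state */
struct pkt { uint16_t len; };                   /* stand-in for rte_mbuf */

static int lock_queue(struct ctx *c)            { (void)c; return 0; }
static void unlock_queue(struct ctx *c)         { (void)c; }
static bool queue_ready(struct ctx *c)          { return c->ready; }
static uint16_t do_dequeue(struct ctx *c, struct pkt **p, uint16_t n)
{
	(void)c; (void)p;
	return n;
}

static uint16_t
dequeue_burst_sketch(struct ctx *c, struct pkt **pkts, uint16_t count)
{
	uint16_t nb_rx = 0;                     /* 0 on any early exit */

	if (c == NULL)
		goto out_no_unlock;             /* nothing acquired yet */

	if (lock_queue(c) != 0)
		goto out_no_unlock;             /* lock not taken */

	if (!queue_ready(c))
		goto out_access_unlock;         /* lock taken, must release */

	nb_rx = do_dequeue(c, pkts, count);

out_access_unlock:
	unlock_queue(c);
out_no_unlock:
	return nb_rx;
}

This removes the repeated "count = 0;" assignments and keeps a single return
statement, which is what the diff below does with nb_rx.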
Suggested-by: David Marchand
Signed-off-by: Maxime Coquelin
---
 lib/vhost/virtio_net.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 3a4955fd30..59ea2d16a5 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -4197,52 +4197,51 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf *rarp_mbuf = NULL;
 	struct vhost_virtqueue *vq;
 	int16_t success = 1;
+	uint16_t nb_rx = 0;
 
 	dev = get_device(vid);
 	if (!dev || !nr_inflight)
-		return 0;
+		goto out_no_unlock;
 
 	*nr_inflight = -1;
 
 	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: built-in vhost net backend is disabled.",
 			__func__);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: invalid virtqueue idx %d.",
 			__func__, queue_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) {
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: invalid dma id %d.",
 			__func__, dma_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	if (unlikely(!dma_copy_track[dma_id].vchans ||
 				!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: invalid channel %d:%u.",
 			__func__, dma_id, vchan_id);
-		return 0;
+		goto out_no_unlock;
 	}
 
 	vq = dev->virtqueue[queue_id];
 
 	if (unlikely(rte_rwlock_read_trylock(&vq->access_lock) != 0))
-		return 0;
+		goto out_no_unlock;
 
 	if (unlikely(vq->enabled == 0)) {
-		count = 0;
 		goto out_access_unlock;
 	}
 
 	if (unlikely(!vq->async)) {
 		VHOST_DATA_LOG(dev->ifname, ERR,
 			"%s: async not registered for queue id %d.",
 			__func__, queue_id);
-		count = 0;
 		goto out_access_unlock;
 	}
 
@@ -4253,7 +4252,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 		rte_rwlock_read_unlock(&vq->access_lock);
 		virtio_dev_vring_translate(dev, vq);
-		count = 0;
 		goto out_no_unlock;
 	}
 
@@ -4280,7 +4278,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
 		if (rarp_mbuf == NULL) {
 			VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet.");
-			count = 0;
 			goto out;
 		}
 		/*
@@ -4295,22 +4292,22 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 
 	if (vq_is_packed(dev)) {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			count = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_packed_legacy(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 		else
-			count = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_packed_compliant(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 	} else {
 		if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
-			count = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 		else
-			count = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
+			nb_rx = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool,
 					pkts, count, dma_id, vchan_id);
 	}
 
 	*nr_inflight = vq->async->pkts_inflight_n;
-	vhost_queue_stats_update(dev, vq, pkts, count);
+	vhost_queue_stats_update(dev, vq, pkts, nb_rx);
 
 out:
 	vhost_user_iotlb_rd_unlock(vq);
@@ -4319,8 +4316,8 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	rte_rwlock_read_unlock(&vq->access_lock);
 
 	if (unlikely(rarp_mbuf != NULL))
-		count += 1;
+		nb_rx += 1;
 
 out_no_unlock:
-	return count;
+	return nb_rx;
 }
-- 
2.47.1