From mboxrd@z Thu Jan  1 00:00:00 1970
From: Balazs Nemeth <bnemeth@redhat.com>
To: bnemeth@redhat.com, dev@dpdk.org, maxime.coquelin@redhat.com,
	david.marchand@redhat.com
Date: Fri, 28 May 2021 12:26:29 +0200
Message-Id:
 <62e04f299f15e4595959febb0bbbb65ad9a6df1d.1622197407.git.bnemeth@redhat.com>
Subject: [dpdk-dev] [PATCH] vhost: allocate and free packets in bulk in Tx split
List-Id: DPDK patches and discussions

Same idea as commit a287ac28919d ("vhost: allocate and free packets in
bulk in Tx packed"), allocate and free packets in bulk. Also remove the
unused function virtio_dev_pktmbuf_alloc.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
---
 lib/vhost/virtio_net.c | 37 ++++++++-----------------------------
 1 file changed, 8 insertions(+), 29 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 8da8a86a10..32aa2c19a9 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2670,32 +2670,6 @@ virtio_dev_pktmbuf_prep(struct virtio_net *dev, struct rte_mbuf *pkt,
 	return -1;
 }
 
-/*
- * Allocate a host supported pktmbuf.
- */
-static __rte_always_inline struct rte_mbuf *
-virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
-			 uint32_t data_len)
-{
-	struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);
-
-	if (unlikely(pkt == NULL)) {
-		VHOST_LOG_DATA(ERR,
-			"Failed to allocate memory for mbuf.\n");
-		return NULL;
-	}
-
-	if (virtio_dev_pktmbuf_prep(dev, pkt, data_len)) {
-		/* Data doesn't fit into the buffer and the host supports
-		 * only linear buffers
-		 */
-		rte_pktmbuf_free(pkt);
-		return NULL;
-	}
-
-	return pkt;
-}
-
 __rte_always_inline
 static uint16_t
 virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
@@ -2725,6 +2699,9 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	VHOST_LOG_DATA(DEBUG, "(%d) about to dequeue %u buffers\n",
 			dev->vid, count);
 
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
+		return 0;
+
 	for (i = 0; i < count; i++) {
 		struct buf_vector buf_vec[BUF_VECTOR_MAX];
 		uint16_t head_idx;
@@ -2741,8 +2718,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 		update_shadow_used_ring_split(vq, head_idx, 0);
 
-		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL)) {
+		err = virtio_dev_pktmbuf_prep(dev, pkts[i], buf_len);
+		if (unlikely(err)) {
 			/*
 			 * mbuf allocation fails for jumbo packets when external
 			 * buffer allocation is not allowed and linear buffer
@@ -2762,7 +2739,6 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool, legacy_ol_flags);
 		if (unlikely(err)) {
-			rte_pktmbuf_free(pkts[i]);
 			if (!allocerr_warned) {
 				VHOST_LOG_DATA(ERR,
 					"Failed to copy desc to mbuf on %s.\n",
@@ -2775,6 +2751,9 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
+	if (i != count)
+		rte_pktmbuf_free_bulk(&pkts[i], count - i);
+
 	vq->last_avail_idx += i;
 
 	do_data_copy_dequeue(vq);
-- 
2.30.2