From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
To: Praveen Kaligineedi
Cc: Joshua Washington, dpdk stable
Subject: patch 'net/gve: allocate Rx QPL pages using malloc' has been queued to stable release 24.11.2
Date: Thu, 13 Feb 2025 09:58:06 +0000
Message-ID: <20250213095933.362078-39-ktraynor@redhat.com>
In-Reply-To: <20250213095933.362078-1-ktraynor@redhat.com>
References: <20250213095933.362078-1-ktraynor@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 24.11.2

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/17/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch.
If there were code changes for rebasing (i.e. not only metadata diffs),
please double-check that the rebase was correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/9873a135bfba543a7d63145b3e304f839b3d116d

Thanks.

Kevin

---
>From 9873a135bfba543a7d63145b3e304f839b3d116d Mon Sep 17 00:00:00 2001
From: Praveen Kaligineedi
Date: Thu, 9 Jan 2025 11:46:38 -0800
Subject: [PATCH] net/gve: allocate Rx QPL pages using malloc

[ upstream commit a71168a775e658ac7e9cc839f53d25953d45bed9 ]

Allocating a QPL for an RX queue might fail if enough contiguous IOVA
memory cannot be allocated. This can commonly occur when using 2MB huge
pages, because 1024 4K buffers are allocated for each RX ring by default,
resulting in 4MB per ring. However, the only requirement for RX QPLs is
that each 4K buffer be IOVA contiguous, not the entire QPL. Therefore,
malloc will be used to allocate RX QPLs instead.

Note that TX queues require the entire QPL to be IOVA contiguous, so they
will continue to use the memzone-based allocation.

Fixes: a46583cf43c8 ("net/gve: support Rx/Tx")

Signed-off-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/gve/gve_ethdev.c | 110 ++++++++++++++++++++++++++++-------
 drivers/net/gve/gve_ethdev.h |   5 +-
 drivers/net/gve/gve_rx.c     |   2 +-
 3 files changed, 93 insertions(+), 24 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index db4ebe7036..e471a34e61 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -23,40 +23,95 @@ gve_write_version(uint8_t *driver_version_register)
 }
 
-static struct gve_queue_page_list *
-gve_alloc_queue_page_list(const char *name, uint32_t num_pages)
+static const struct rte_memzone *
+gve_alloc_using_mz(const char *name, uint32_t num_pages)
 {
-	struct gve_queue_page_list *qpl;
 	const struct rte_memzone *mz;
-	dma_addr_t page_bus;
-	uint32_t i;
-
-	qpl = rte_zmalloc("qpl struct", sizeof(struct gve_queue_page_list), 0);
-	if (!qpl)
-		return NULL;
-
 	mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
					 rte_socket_id(),
					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
-	if (mz == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to alloc %s.", name);
-		goto free_qpl_struct;
+	if (mz == NULL)
+		PMD_DRV_LOG(ERR, "Failed to alloc memzone %s.", name);
+	return mz;
+}
+
+static int
+gve_alloc_using_malloc(void **bufs, uint32_t num_entries)
+{
+	uint32_t i;
+
+	for (i = 0; i < num_entries; i++) {
+		bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE, rte_socket_id());
+		if (bufs[i] == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to malloc");
+			goto free_bufs;
+		}
 	}
+	return 0;
+
+free_bufs:
+	while (i > 0)
+		rte_free(bufs[--i]);
+
+	return -ENOMEM;
+}
+
+static struct gve_queue_page_list *
+gve_alloc_queue_page_list(const char *name, uint32_t num_pages, bool is_rx)
+{
+	struct gve_queue_page_list *qpl;
+	const struct rte_memzone *mz;
+	uint32_t i;
+
+	qpl = rte_zmalloc("qpl struct", sizeof(struct gve_queue_page_list), 0);
+	if (!qpl)
+		return NULL;
+
 	qpl->page_buses = rte_zmalloc("qpl page buses",
				      num_pages * sizeof(dma_addr_t), 0);
 	if (qpl->page_buses == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to alloc qpl page buses");
-		goto free_qpl_memzone;
+		goto free_qpl_struct;
 	}
-	page_bus = mz->iova;
-	for (i = 0; i < num_pages; i++) {
-		qpl->page_buses[i] = page_bus;
-		page_bus += PAGE_SIZE;
+
+	if (is_rx) {
+		/* RX QPL need not be IOVA contiguous.
+		 * Allocate 4K size buffers using malloc
+		 */
+		qpl->qpl_bufs = rte_zmalloc("qpl bufs",
+			num_pages * sizeof(void *), 0);
+		if (qpl->qpl_bufs == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc qpl bufs");
+			goto free_qpl_page_buses;
+		}
+
+		if (gve_alloc_using_malloc(qpl->qpl_bufs, num_pages))
+			goto free_qpl_page_bufs;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < num_pages; i++)
+			qpl->page_buses[i] =
+				rte_malloc_virt2iova(qpl->qpl_bufs[i]);
+	} else {
+		/* TX QPL needs to be IOVA contiguous
+		 * Allocate QPL using memzone
+		 */
+		mz = gve_alloc_using_mz(name, num_pages);
+		if (!mz)
+			goto free_qpl_page_buses;
+
+		qpl->mz = mz;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < num_pages; i++)
+			qpl->page_buses[i] = mz->iova + i * PAGE_SIZE;
 	}
-	qpl->mz = mz;
+
 	qpl->num_entries = num_pages;
 	return qpl;
 
-free_qpl_memzone:
-	rte_memzone_free(qpl->mz);
+free_qpl_page_bufs:
+	rte_free(qpl->qpl_bufs);
+free_qpl_page_buses:
+	rte_free(qpl->page_buses);
 free_qpl_struct:
 	rte_free(qpl);
 
@@ -70,5 +125,16 @@ gve_free_queue_page_list(struct gve_queue_page_list *qpl)
 		rte_memzone_free(qpl->mz);
 		qpl->mz = NULL;
+	} else if (qpl->qpl_bufs) {
+		uint32_t i;
+
+		for (i = 0; i < qpl->num_entries; i++)
+			rte_free(qpl->qpl_bufs[i]);
 	}
+
+	if (qpl->qpl_bufs) {
+		rte_free(qpl->qpl_bufs);
+		qpl->qpl_bufs = NULL;
+	}
+
 	if (qpl->page_buses) {
 		rte_free(qpl->page_buses);
@@ -90,5 +156,5 @@ gve_setup_queue_page_list(struct gve_priv *priv, uint16_t queue_id, bool is_rx,
 	snprintf(qpl_name, sizeof(qpl_name), "gve_%s_%s_qpl%d",
		 priv->pci_dev->device.name, queue_type_string, queue_id);
-	qpl = gve_alloc_queue_page_list(qpl_name, num_pages);
+	qpl = gve_alloc_queue_page_list(qpl_name, num_pages, is_rx);
 	if (!qpl) {
 		PMD_DRV_LOG(ERR,
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index c417a0b31c..35cb9062b1 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -63,5 +63,8 @@ struct gve_queue_page_list {
 	uint32_t num_entries;
 	dma_addr_t *page_buses; /* the dma addrs of the pages */
-	const struct rte_memzone *mz;
+	union {
+		const struct rte_memzone *mz; /* memzone allocated for TX queue */
+		void **qpl_bufs; /* RX qpl-buffer list allocated using malloc*/
+	};
 };
 
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 1f5fa3f1da..7a91c31ad2 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -118,5 +118,5 @@ gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
 	}
 	if (rxq->is_gqi_qpl) {
-		addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+		addr = (uint64_t)rxq->qpl->qpl_bufs[rx_id] + padding;
 		rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
			   (void *)(size_t)addr, len);
-- 
2.48.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2025-02-12 17:29:38.798525945 +0000
+++ 0039-net-gve-allocate-Rx-QPL-pages-using-malloc.patch	2025-02-12 17:29:34.315945721 +0000
@@ -1 +1 @@
-From a71168a775e658ac7e9cc839f53d25953d45bed9 Mon Sep 17 00:00:00 2001
+From 9873a135bfba543a7d63145b3e304f839b3d116d Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit a71168a775e658ac7e9cc839f53d25953d45bed9 ]
+
@@ -17 +18,0 @@
-Cc: stable@dpdk.org
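
As an aside for reviewers following the rationale in the commit message, here
is a minimal, self-contained sketch (not part of the queued patch) of the two
allocation strategies side by side. The helper names gve_sketch_alloc_rx_pages
and gve_sketch_alloc_tx_region and the QPL_PAGE_SIZE constant are illustrative
stand-ins for the driver's internals; only the public DPDK APIs
(rte_malloc_socket, rte_malloc_virt2iova, rte_memzone_reserve_aligned) are
real.

#include <stdint.h>
#include <rte_malloc.h>
#include <rte_memzone.h>
#include <rte_lcore.h>

/* Illustrative stand-in for the driver's PAGE_SIZE (4K QPL pages). */
#define QPL_PAGE_SIZE 4096

/*
 * RX-style allocation: each 4K page only has to be IOVA-contiguous with
 * itself, so pages are allocated one by one and each page's IOVA recorded.
 */
static int
gve_sketch_alloc_rx_pages(void **bufs, rte_iova_t *page_buses,
			  uint32_t num_pages)
{
	uint32_t i;

	for (i = 0; i < num_pages; i++) {
		/* 4K size with 4K alignment: each page is contiguous on its own */
		bufs[i] = rte_malloc_socket(NULL, QPL_PAGE_SIZE,
					    QPL_PAGE_SIZE, rte_socket_id());
		if (bufs[i] == NULL)
			goto unwind;
		page_buses[i] = rte_malloc_virt2iova(bufs[i]);
	}
	return 0;

unwind:	/* roll back only the pages allocated so far */
	while (i > 0)
		rte_free(bufs[--i]);
	return -ENOMEM;
}

/*
 * TX-style allocation: the device needs the whole QPL as one IOVA-contiguous
 * region, so a single memzone reservation is kept and page IOVAs are derived
 * by offset.
 */
static const struct rte_memzone *
gve_sketch_alloc_tx_region(const char *name, uint32_t num_pages,
			   rte_iova_t *page_buses)
{
	const struct rte_memzone *mz;
	uint32_t i;

	mz = rte_memzone_reserve_aligned(name,
					 (size_t)num_pages * QPL_PAGE_SIZE,
					 rte_socket_id(),
					 RTE_MEMZONE_IOVA_CONTIG,
					 QPL_PAGE_SIZE);
	if (mz == NULL)
		return NULL;
	for (i = 0; i < num_pages; i++)
		page_buses[i] = mz->iova + (rte_iova_t)i * QPL_PAGE_SIZE;
	return mz;
}

The design point the patch relies on: a 4K allocation aligned to 4K cannot
straddle a hugepage boundary, so each RX page is IOVA-contiguous by itself,
while RTE_MEMZONE_IOVA_CONTIG demands the whole reservation be contiguous and
can therefore fail once the QPL (4MB per ring by default) exceeds the 2MB
hugepage size.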