From mboxrd@z Thu Jan 1 00:00:00 1970
From: luca.boccassi@gmail.com
To: Praveen Kaligineedi
Cc: Joshua Washington, dpdk stable
Subject: patch 'net/gve: allocate Rx QPL pages using malloc' has been queued to stable release 22.11.11
Date: Mon, 27 Oct 2025 16:18:39 +0000
Message-ID: <20251027162001.3710450-1-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.47.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

FYI, your patch has been queued to stable release 22.11.11

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 10/29/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/1d1ce9c3bf042be1763e2164ab6427f2d674a590

Thanks.

Luca Boccassi

---
>From 1d1ce9c3bf042be1763e2164ab6427f2d674a590 Mon Sep 17 00:00:00 2001
From: Praveen Kaligineedi
Date: Thu, 4 Sep 2025 13:59:32 -0700
Subject: [PATCH] net/gve: allocate Rx QPL pages using malloc

Allocating the QPL for an RX queue can fail if enough IOVA-contiguous
memory cannot be found. This commonly occurs when using 2MB huge pages,
because 1024 4K buffers are allocated for each RX ring by default,
resulting in 4MB per ring. However, the only requirement for an RX QPL
is that each 4K buffer be IOVA contiguous, not the entire QPL.
Therefore, malloc is used to allocate RX QPLs instead.

Note that TX queues require the entire QPL to be IOVA contiguous, so
they continue to use the memzone-based allocation.
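In other words, the RX path only has to guarantee IOVA contiguity one page
at a time. As a minimal sketch of that idea (the helper name
rx_qpl_alloc_sketch, its parameters, and the PAGE_SIZE fallback below are
illustrative only and do not appear in the driver; the actual change is in
the diff that follows), each 4K buffer is allocated and page-aligned on its
own and its IOVA is recorded individually:

	#include <errno.h>
	#include <stdint.h>

	#include <rte_lcore.h>
	#include <rte_malloc.h>

	#ifndef PAGE_SIZE
	#define PAGE_SIZE 4096	/* assumption: 4K pages, as in gve_ethdev.h */
	#endif

	/* Sketch: allocate num_pages independent 4K buffers and record the
	 * IOVA of each one; only per-buffer IOVA contiguity is required.
	 */
	static int
	rx_qpl_alloc_sketch(void **bufs, rte_iova_t *iovas, uint32_t num_pages)
	{
		uint32_t i;

		for (i = 0; i < num_pages; i++) {
			/* PAGE_SIZE-sized and PAGE_SIZE-aligned, so each
			 * buffer is IOVA contiguous by construction.
			 */
			bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE,
						    rte_socket_id());
			if (bufs[i] == NULL) {
				while (i > 0)
					rte_free(bufs[--i]);
				return -ENOMEM;
			}
			iovas[i] = rte_malloc_virt2iova(bufs[i]);
		}
		return 0;
	}

A TX QPL cannot be built this way, since the whole TX QPL region must be
IOVA contiguous, which is why the memzone path is kept for TX queues.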
Fixes: a46583cf43c8 ("net/gve: support Rx/Tx")

Signed-off-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/gve/gve_ethdev.c | 142 +++++++++++++++++++++++++++++------
 drivers/net/gve/gve_ethdev.h |   5 +-
 drivers/net/gve/gve_rx.c     |   4 +-
 3 files changed, 126 insertions(+), 25 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 0796d37760..43b2b8b2b0 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -28,13 +28,45 @@ gve_write_version(uint8_t *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static const struct rte_memzone *
+gve_alloc_using_mz(const char *name, uint32_t num_pages)
+{
+	const struct rte_memzone *mz;
+	mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
+					 rte_socket_id(),
+					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
+	if (mz == NULL)
+		PMD_DRV_LOG(ERR, "Failed to alloc memzone %s.", name);
+	return mz;
+}
+
+static int
+gve_alloc_using_malloc(void **bufs, uint32_t num_entries)
+{
+	uint32_t i;
+
+	for (i = 0; i < num_entries; i++) {
+		bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE, rte_socket_id());
+		if (bufs[i] == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to malloc");
+			goto free_bufs;
+		}
+	}
+	return 0;
+
+free_bufs:
+	while (i > 0)
+		rte_free(bufs[--i]);
+
+	return -ENOMEM;
+}
+
 static int
-gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages)
+gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages,
+			  bool is_rx)
 {
-	char z_name[RTE_MEMZONE_NAMESIZE];
 	struct gve_queue_page_list *qpl;
-	const struct rte_memzone *mz;
-	dma_addr_t page_bus;
+	int err = 0;
 	uint32_t i;
 
 	if (priv->num_registered_pages + pages >
@@ -45,31 +77,79 @@ gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages)
 		return -EINVAL;
 	}
 	qpl = &priv->qpl[id];
-	snprintf(z_name, sizeof(z_name), "gve_%s_qpl%d", priv->pci_dev->device.name, id);
-	mz = rte_memzone_reserve_aligned(z_name, pages * PAGE_SIZE,
-					 rte_socket_id(),
-					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
-	if (mz == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to alloc %s.", z_name);
-		return -ENOMEM;
-	}
+
 	qpl->page_buses = rte_zmalloc("qpl page buses", pages * sizeof(dma_addr_t), 0);
 	if (qpl->page_buses == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to alloc qpl %u page buses", id);
 		return -ENOMEM;
 	}
-	page_bus = mz->iova;
-	for (i = 0; i < pages; i++) {
-		qpl->page_buses[i] = page_bus;
-		page_bus += PAGE_SIZE;
+
+	if (is_rx) {
+		/* RX QPL need not be IOVA contiguous.
+		 * Allocate 4K size buffers using malloc
+		 */
+		qpl->qpl_bufs = rte_zmalloc("qpl bufs",
+					    pages * sizeof(void *), 0);
+		if (qpl->qpl_bufs == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc qpl bufs");
+			err = -ENOMEM;
+			goto free_qpl_page_buses;
+		}
+
+		err = gve_alloc_using_malloc(qpl->qpl_bufs, pages);
+		if (err)
+			goto free_qpl_page_bufs;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < pages; i++)
+			qpl->page_buses[i] =
+				rte_malloc_virt2iova(qpl->qpl_bufs[i]);
+	} else {
+		char z_name[RTE_MEMZONE_NAMESIZE];
+
+		snprintf(z_name, sizeof(z_name), "gve_%s_qpl%d", priv->pci_dev->device.name, id);
+
+		/* TX QPL needs to be IOVA contiguous
+		 * Allocate QPL using memzone
+		 */
+		qpl->mz = gve_alloc_using_mz(z_name, pages);
+		if (!qpl->mz) {
+			err = -ENOMEM;
+			goto free_qpl_page_buses;
+		}
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < pages; i++)
+			qpl->page_buses[i] = qpl->mz->iova + i * PAGE_SIZE;
 	}
+
 	qpl->id = id;
-	qpl->mz = mz;
 	qpl->num_entries = pages;
 
 	priv->num_registered_pages += pages;
 
 	return 0;
+
+free_qpl_page_bufs:
+	rte_free(qpl->qpl_bufs);
+free_qpl_page_buses:
+	rte_free(qpl->page_buses);
+	return err;
+}
+
+/*
+ * Free QPL bufs in RX QPLs. Should not be used on TX QPLs.
+ **/
+static void
+gve_free_qpl_bufs(struct gve_queue_page_list *qpl)
+{
+	uint32_t i;
+
+	for (i = 0; i < qpl->num_entries; i++)
+		rte_free(qpl->qpl_bufs[i]);
+
+	rte_free(qpl->qpl_bufs);
+	qpl->qpl_bufs = NULL;
 }
 
 static void
@@ -79,9 +159,22 @@ gve_free_qpls(struct gve_priv *priv)
 	uint16_t nb_rxqs = priv->max_nb_rxq;
 	uint32_t i;
 
-	for (i = 0; i < nb_txqs + nb_rxqs; i++) {
-		if (priv->qpl[i].mz != NULL)
+	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
+		return;
+
+	/* Free TX QPLs. */
+	for (i = 0; i < nb_txqs; i++) {
+		if (priv->qpl[i].mz) {
 			rte_memzone_free(priv->qpl[i].mz);
+			priv->qpl[i].mz = NULL;
+		}
+		rte_free(priv->qpl[i].page_buses);
+	}
+
+	/* Free RX QPLs. */
+	for (; i < nb_rxqs; i++) {
+		if (priv->qpl[i].qpl_bufs)
+			gve_free_qpl_bufs(&priv->qpl[i]);
 		rte_free(priv->qpl[i].page_buses);
 	}
 
@@ -562,11 +655,16 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 	}
 
 	for (i = 0; i < priv->max_nb_txq + priv->max_nb_rxq; i++) {
-		if (i < priv->max_nb_txq)
+		bool is_rx;
+
+		if (i < priv->max_nb_txq) {
 			pages = priv->tx_pages_per_qpl;
-		else
+			is_rx = false;
+		} else {
 			pages = priv->rx_data_slot_cnt;
-		err = gve_alloc_queue_page_list(priv, i, pages);
+			is_rx = true;
+		}
+		err = gve_alloc_queue_page_list(priv, i, pages, is_rx);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to alloc qpl %u.", i);
 			goto err_qpl;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index b7702a1249..effacc2795 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -39,7 +39,10 @@ struct gve_queue_page_list {
 	uint32_t id; /* unique id */
 	uint32_t num_entries;
 	dma_addr_t *page_buses; /* the dma addrs of the pages */
-	const struct rte_memzone *mz;
+	union {
+		const struct rte_memzone *mz; /* memzone allocated for TX queue */
+		void **qpl_bufs; /* RX qpl-buffer list allocated using malloc*/
+	};
 };
 
 /* A TX desc ring entry */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 50f9f5c370..e020b4af10 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -105,9 +105,9 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD;
 		rxe = rxq->sw_ring[rx_id];
 		if (rxq->is_gqi_qpl) {
-			addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + GVE_RX_PAD;
+			addr = (uint64_t)rxq->qpl->qpl_bufs[rx_id] + GVE_RX_PAD;
 			rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
-				   (void *)(size_t)addr, len);
+				(void *)(size_t)addr, len);
 		}
 		rxe->pkt_len = len;
 		rxe->data_len = len;
-- 
2.47.3