From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 23 Dec 2024 12:16:23 -0800
In-Reply-To: <20241218234635.2009033-1-joshwash@google.com>
Mime-Version: 1.0
References: <20241218234635.2009033-1-joshwash@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241223201623.5666-1-joshwash@google.com>
Subject: [PATCH v2] net/gve: Allocate qpl pages using malloc if memzone allocation fails
From: Joshua Washington
To: Jeroen de Borst, Rushil Gupta, Joshua Washington
Cc: dev@dpdk.org, Ferruh Yigit, Praveen Kaligineedi
Content-Type: text/plain; charset="UTF-8"
List-Id: DPDK patches and discussions

From: Praveen Kaligineedi

Allocating QPL for an RX queue might fail if enough contiguous IOVA
memory cannot be allocated. However, the only requirement for QPL for RX
is that each 4K buffer be IOVA contiguous, not the entire QPL.
Therefore, use malloc to allocate the 4K buffers if the memzone
allocation fails. Use memzone-based allocation for TX, since the TX
queue requires IOVA-contiguous QPL memory.

Google-Bug-Id: 372857163
Signed-off-by: Praveen Kaligineedi
Reviewed-by: Joshua Washington
---
 drivers/net/gve/gve_ethdev.c | 102 ++++++++++++++++++++++++++++-------
 drivers/net/gve/gve_ethdev.h |   5 +-
 drivers/net/gve/gve_rx.c     |   2 +-
 3 files changed, 89 insertions(+), 20 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index db4ebe7036..e471a34e61 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -22,42 +22,97 @@ gve_write_version(uint8_t *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static const struct rte_memzone *
+gve_alloc_using_mz(const char *name, uint32_t num_pages)
+{
+	const struct rte_memzone *mz;
+	mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
+					 rte_socket_id(),
+					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
+	if (mz == NULL)
+		PMD_DRV_LOG(ERR, "Failed to alloc memzone %s.", name);
+	return mz;
+}
+
+static int
+gve_alloc_using_malloc(void **bufs, uint32_t num_entries)
+{
+	uint32_t i;
+
+	for (i = 0; i < num_entries; i++) {
+		bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE, rte_socket_id());
+		if (bufs[i] == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to malloc");
+			goto free_bufs;
+		}
+	}
+	return 0;
+
+free_bufs:
+	while (i > 0)
+		rte_free(bufs[--i]);
+
+	return -ENOMEM;
+}
+
 static struct gve_queue_page_list *
-gve_alloc_queue_page_list(const char *name, uint32_t num_pages)
+gve_alloc_queue_page_list(const char *name, uint32_t num_pages, bool is_rx)
 {
 	struct gve_queue_page_list *qpl;
 	const struct rte_memzone *mz;
-	dma_addr_t page_bus;
 	uint32_t i;
 
 	qpl = rte_zmalloc("qpl struct", sizeof(struct gve_queue_page_list), 0);
 	if (!qpl)
 		return NULL;
 
-	mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
-					 rte_socket_id(),
-					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
-	if (mz == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to alloc %s.", name);
-		goto free_qpl_struct;
-	}
 	qpl->page_buses = rte_zmalloc("qpl page buses",
 		num_pages * sizeof(dma_addr_t), 0);
 	if (qpl->page_buses == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to alloc qpl page buses");
-		goto free_qpl_memzone;
+		goto free_qpl_struct;
 	}
-	page_bus = mz->iova;
-	for (i = 0; i < num_pages; i++) {
-		qpl->page_buses[i] = page_bus;
-		page_bus += PAGE_SIZE;
+
+	if (is_rx) {
+		/* RX QPL need not be IOVA contiguous.
+		 * Allocate 4K size buffers using malloc
+		 */
+		qpl->qpl_bufs = rte_zmalloc("qpl bufs",
+			num_pages * sizeof(void *), 0);
+		if (qpl->qpl_bufs == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc qpl bufs");
+			goto free_qpl_page_buses;
+		}
+
+		if (gve_alloc_using_malloc(qpl->qpl_bufs, num_pages))
+			goto free_qpl_page_bufs;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < num_pages; i++)
+			qpl->page_buses[i] =
+				rte_malloc_virt2iova(qpl->qpl_bufs[i]);
+	} else {
+		/* TX QPL needs to be IOVA contiguous
+		 * Allocate QPL using memzone
+		 */
+		mz = gve_alloc_using_mz(name, num_pages);
+		if (!mz)
+			goto free_qpl_page_buses;
+
+		qpl->mz = mz;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < num_pages; i++)
+			qpl->page_buses[i] = mz->iova + i * PAGE_SIZE;
 	}
-	qpl->mz = mz;
+
 	qpl->num_entries = num_pages;
 	return qpl;
 
-free_qpl_memzone:
-	rte_memzone_free(qpl->mz);
+free_qpl_page_bufs:
+	rte_free(qpl->qpl_bufs);
+free_qpl_page_buses:
+	rte_free(qpl->page_buses);
 free_qpl_struct:
 	rte_free(qpl);
 	return NULL;
@@ -69,7 +124,18 @@ gve_free_queue_page_list(struct gve_queue_page_list *qpl)
 	if (qpl->mz) {
 		rte_memzone_free(qpl->mz);
 		qpl->mz = NULL;
+	} else if (qpl->qpl_bufs) {
+		uint32_t i;
+
+		for (i = 0; i < qpl->num_entries; i++)
+			rte_free(qpl->qpl_bufs[i]);
+	}
+
+	if (qpl->qpl_bufs) {
+		rte_free(qpl->qpl_bufs);
+		qpl->qpl_bufs = NULL;
 	}
+
 	if (qpl->page_buses) {
 		rte_free(qpl->page_buses);
 		qpl->page_buses = NULL;
@@ -89,7 +155,7 @@ gve_setup_queue_page_list(struct gve_priv *priv, uint16_t queue_id, bool is_rx,
 	/* Allocate a new QPL. */
 	snprintf(qpl_name, sizeof(qpl_name), "gve_%s_%s_qpl%d",
 		 priv->pci_dev->device.name, queue_type_string, queue_id);
-	qpl = gve_alloc_queue_page_list(qpl_name, num_pages);
+	qpl = gve_alloc_queue_page_list(qpl_name, num_pages, is_rx);
 	if (!qpl) {
 		PMD_DRV_LOG(ERR,
 			    "Failed to alloc %s qpl for queue %hu.",
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index c417a0b31c..35cb9062b1 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -62,7 +62,10 @@ struct gve_queue_page_list {
 	uint32_t id; /* unique id */
 	uint32_t num_entries;
 	dma_addr_t *page_buses; /* the dma addrs of the pages */
-	const struct rte_memzone *mz;
+	union {
+		const struct rte_memzone *mz; /* memzone allocated for TX queue */
+		void **qpl_bufs; /* RX qpl-buffer list allocated using malloc*/
+	};
 };
 
 /* A TX desc ring entry */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 1f5fa3f1da..7a91c31ad2 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -117,7 +117,7 @@ gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
 		rxq->ctx.mbuf_tail = rxe;
 	}
 	if (rxq->is_gqi_qpl) {
-		addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+		addr = (uint64_t)rxq->qpl->qpl_bufs[rx_id] + padding;
 		rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
 			   (void *)(size_t)addr, len);
 	}
-- 
2.47.1.613.gc27f4b7a9f-goog
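
For readers who want to see the allocation split from the commit message outside
the driver, below is a minimal standalone sketch. It is illustrative only, not
part of the patch: it uses the same DPDK calls as the diff above
(rte_malloc_socket()/rte_malloc_virt2iova() for the per-page RX case, an
RTE_MEMZONE_IOVA_CONTIG memzone for the TX case), but QPL_PAGE_SIZE, NUM_PAGES
and the "demo_tx_qpl" name are made-up placeholders for the driver's PAGE_SIZE
and real queue parameters.

/*
 * Illustrative sketch only, not part of the patch. QPL_PAGE_SIZE, NUM_PAGES
 * and "demo_tx_qpl" are placeholders, not driver symbols.
 *
 * Build like any DPDK application, e.g.:
 *   cc demo.c $(pkg-config --cflags --libs libdpdk) -o demo
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_malloc.h>
#include <rte_memzone.h>

#define QPL_PAGE_SIZE 4096	/* stands in for the driver's PAGE_SIZE */
#define NUM_PAGES 8		/* small demo value */

int
main(int argc, char **argv)
{
	void *rx_bufs[NUM_PAGES];
	const struct rte_memzone *tx_mz;
	uint32_t i;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* RX-style QPL: each 4K page is allocated on its own, so only the
	 * page itself has to be IOVA contiguous.
	 */
	for (i = 0; i < NUM_PAGES; i++) {
		rx_bufs[i] = rte_malloc_socket(NULL, QPL_PAGE_SIZE,
					       QPL_PAGE_SIZE, rte_socket_id());
		if (rx_bufs[i] == NULL) {
			printf("rx page %u alloc failed\n", i);
			goto cleanup;
		}
		printf("rx page %u: va=%p iova=0x%" PRIx64 "\n", i,
		       rx_bufs[i], (uint64_t)rte_malloc_virt2iova(rx_bufs[i]));
	}

	/* TX-style QPL: the whole region must be one IOVA-contiguous block,
	 * hence the memzone reserved with RTE_MEMZONE_IOVA_CONTIG.
	 */
	tx_mz = rte_memzone_reserve_aligned("demo_tx_qpl",
					    NUM_PAGES * QPL_PAGE_SIZE,
					    rte_socket_id(),
					    RTE_MEMZONE_IOVA_CONTIG,
					    QPL_PAGE_SIZE);
	if (tx_mz == NULL) {
		printf("IOVA-contiguous memzone unavailable\n");
	} else {
		for (i = 0; i < NUM_PAGES; i++)
			printf("tx page %u: iova=0x%" PRIx64 "\n", i,
			       (uint64_t)(tx_mz->iova + i * QPL_PAGE_SIZE));
		rte_memzone_free(tx_mz);
	}

cleanup:
	/* Free whichever RX pages were successfully allocated. */
	while (i > 0)
		rte_free(rx_bufs[--i]);
	rte_eal_cleanup();
	return 0;
}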