Date: Thu, 9 Jan 2025 11:46:38 -0800
In-Reply-To: <20250107190258.2107909-1-joshwash@google.com>
References: <20250107190258.2107909-1-joshwash@google.com>
Message-ID: <20250109194638.3262043-1-joshwash@google.com>
Subject: [PATCH v4] net/gve: allocate RX QPL pages using malloc
From: Joshua Washington
To: Jeroen de Borst, Rushil Gupta, Joshua Washington, Junfeng Guo, Xiaoyun Li
Cc: dev@dpdk.org, stable@dpdk.org, Praveen Kaligineedi

From: Praveen Kaligineedi

Allocating a QPL for an RX queue might fail if enough contiguous IOVA
memory cannot be allocated.
This can commonly occur when using 2MB huge pages because 1024 4K buffers
are allocated for each RX ring by default, resulting in 4MB of
IOVA-contiguous memory per ring. However, the only requirement for RX QPLs
is that each individual 4K buffer be IOVA-contiguous, not the entire QPL.
Therefore, RX QPL pages are allocated using malloc instead. Note that TX
queues require the entire QPL to be IOVA-contiguous, so they continue to
use the memzone-based allocation.

v2: Updated RX path to use malloc exclusively
v3: Changed commit description to match updated code
v4: Added Fixes tag to allow 2MB huge pages to be used on older versions
    of DPDK

Fixes: a46583cf43c8 ("net/gve: support Rx/Tx")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/gve/gve_ethdev.c | 102 ++++++++++++++++++++++++++++-------
 drivers/net/gve/gve_ethdev.h |   5 +-
 drivers/net/gve/gve_rx.c     |   2 +-
 3 files changed, 89 insertions(+), 20 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index db4ebe7036..e471a34e61 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -22,42 +22,97 @@ gve_write_version(uint8_t *driver_version_register)
         writeb('\n', driver_version_register);
 }
 
+static const struct rte_memzone *
+gve_alloc_using_mz(const char *name, uint32_t num_pages)
+{
+        const struct rte_memzone *mz;
+        mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
+                                         rte_socket_id(),
+                                         RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
+        if (mz == NULL)
+                PMD_DRV_LOG(ERR, "Failed to alloc memzone %s.", name);
+        return mz;
+}
+
+static int
+gve_alloc_using_malloc(void **bufs, uint32_t num_entries)
+{
+        uint32_t i;
+
+        for (i = 0; i < num_entries; i++) {
+                bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE, rte_socket_id());
+                if (bufs[i] == NULL) {
+                        PMD_DRV_LOG(ERR, "Failed to malloc");
+                        goto free_bufs;
+                }
+        }
+        return 0;
+
+free_bufs:
+        while (i > 0)
+                rte_free(bufs[--i]);
+
+        return -ENOMEM;
+}
+
 static struct gve_queue_page_list *
-gve_alloc_queue_page_list(const char *name, uint32_t num_pages)
+gve_alloc_queue_page_list(const char *name, uint32_t num_pages, bool is_rx)
 {
         struct gve_queue_page_list *qpl;
         const struct rte_memzone *mz;
-        dma_addr_t page_bus;
         uint32_t i;
 
         qpl = rte_zmalloc("qpl struct", sizeof(struct gve_queue_page_list), 0);
         if (!qpl)
                 return NULL;
 
-        mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
-                                         rte_socket_id(),
-                                         RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
-        if (mz == NULL) {
-                PMD_DRV_LOG(ERR, "Failed to alloc %s.", name);
-                goto free_qpl_struct;
-        }
         qpl->page_buses = rte_zmalloc("qpl page buses",
                 num_pages * sizeof(dma_addr_t), 0);
         if (qpl->page_buses == NULL) {
                 PMD_DRV_LOG(ERR, "Failed to alloc qpl page buses");
-                goto free_qpl_memzone;
+                goto free_qpl_struct;
         }
-        page_bus = mz->iova;
-        for (i = 0; i < num_pages; i++) {
-                qpl->page_buses[i] = page_bus;
-                page_bus += PAGE_SIZE;
+
+        if (is_rx) {
+                /* RX QPL need not be IOVA contiguous.
+                 * Allocate 4K size buffers using malloc
+                 */
+                qpl->qpl_bufs = rte_zmalloc("qpl bufs",
+                        num_pages * sizeof(void *), 0);
+                if (qpl->qpl_bufs == NULL) {
+                        PMD_DRV_LOG(ERR, "Failed to alloc qpl bufs");
+                        goto free_qpl_page_buses;
+                }
+
+                if (gve_alloc_using_malloc(qpl->qpl_bufs, num_pages))
+                        goto free_qpl_page_bufs;
+
+                /* Populate the IOVA addresses */
+                for (i = 0; i < num_pages; i++)
+                        qpl->page_buses[i] =
+                                rte_malloc_virt2iova(qpl->qpl_bufs[i]);
+        } else {
+                /* TX QPL needs to be IOVA contiguous
+                 * Allocate QPL using memzone
+                 */
+                mz = gve_alloc_using_mz(name, num_pages);
+                if (!mz)
+                        goto free_qpl_page_buses;
+
+                qpl->mz = mz;
+
+                /* Populate the IOVA addresses */
+                for (i = 0; i < num_pages; i++)
+                        qpl->page_buses[i] = mz->iova + i * PAGE_SIZE;
         }
-        qpl->mz = mz;
+
         qpl->num_entries = num_pages;
         return qpl;
 
-free_qpl_memzone:
-        rte_memzone_free(qpl->mz);
+free_qpl_page_bufs:
+        rte_free(qpl->qpl_bufs);
+free_qpl_page_buses:
+        rte_free(qpl->page_buses);
 free_qpl_struct:
         rte_free(qpl);
         return NULL;
@@ -69,7 +124,18 @@ gve_free_queue_page_list(struct gve_queue_page_list *qpl)
         if (qpl->mz) {
                 rte_memzone_free(qpl->mz);
                 qpl->mz = NULL;
+        } else if (qpl->qpl_bufs) {
+                uint32_t i;
+
+                for (i = 0; i < qpl->num_entries; i++)
+                        rte_free(qpl->qpl_bufs[i]);
+        }
+
+        if (qpl->qpl_bufs) {
+                rte_free(qpl->qpl_bufs);
+                qpl->qpl_bufs = NULL;
         }
+
         if (qpl->page_buses) {
                 rte_free(qpl->page_buses);
                 qpl->page_buses = NULL;
@@ -89,7 +155,7 @@ gve_setup_queue_page_list(struct gve_priv *priv, uint16_t queue_id, bool is_rx,
         /* Allocate a new QPL. */
         snprintf(qpl_name, sizeof(qpl_name), "gve_%s_%s_qpl%d",
                  priv->pci_dev->device.name, queue_type_string, queue_id);
-        qpl = gve_alloc_queue_page_list(qpl_name, num_pages);
+        qpl = gve_alloc_queue_page_list(qpl_name, num_pages, is_rx);
         if (!qpl) {
                 PMD_DRV_LOG(ERR,
                             "Failed to alloc %s qpl for queue %hu.",
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index c417a0b31c..35cb9062b1 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -62,7 +62,10 @@ struct gve_queue_page_list {
         uint32_t id; /* unique id */
         uint32_t num_entries;
         dma_addr_t *page_buses; /* the dma addrs of the pages */
-        const struct rte_memzone *mz;
+        union {
+                const struct rte_memzone *mz; /* memzone allocated for TX queue */
+                void **qpl_bufs; /* RX qpl-buffer list allocated using malloc */
+        };
 };
 
 /* A TX desc ring entry */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 1f5fa3f1da..7a91c31ad2 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -117,7 +117,7 @@ gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
                 rxq->ctx.mbuf_tail = rxe;
         }
         if (rxq->is_gqi_qpl) {
-                addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+                addr = (uint64_t)rxq->qpl->qpl_bufs[rx_id] + padding;
                 rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
                            (void *)(size_t)addr, len);
         }
-- 
2.47.1.613.gc27f4b7a9f-goog
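
As a postscript for reviewers, the RX-side idea described above can be
summarized in a minimal standalone sketch: allocate each 4K page as its own
rte_malloc allocation (individually IOVA-contiguous) and look up its IOVA
afterwards. This is illustrative only; example_alloc_rx_qpl and
EXAMPLE_PAGE_SIZE are placeholder names, not driver symbols, and error
handling is reduced relative to the patch.

#include <errno.h>
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

#define EXAMPLE_PAGE_SIZE 4096  /* placeholder for the driver's PAGE_SIZE */

/* Allocate num_pages standalone 4K buffers and record each page's IOVA.
 * Every buffer is a single page-aligned 4K allocation, so it is
 * IOVA-contiguous on its own, which is all the RX QPL requires; the
 * buffers need not be contiguous with one another.
 */
static int
example_alloc_rx_qpl(void **bufs, rte_iova_t *iovas, uint32_t num_pages)
{
        uint32_t i;

        for (i = 0; i < num_pages; i++) {
                bufs[i] = rte_malloc_socket(NULL, EXAMPLE_PAGE_SIZE,
                                            EXAMPLE_PAGE_SIZE, rte_socket_id());
                if (bufs[i] == NULL)
                        goto free_bufs;
                iovas[i] = rte_malloc_virt2iova(bufs[i]);
        }
        return 0;

free_bufs:
        while (i > 0)
                rte_free(bufs[--i]);
        return -ENOMEM;
}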