From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 3 Mar 2025 15:06:08 -0800
Mime-Version: 1.0
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Message-ID: <20250303230608.2228640-1-joshwash@google.com>
Subject: [PATCH 23.11] net/gve: allocate Rx QPL pages using malloc
From: Joshua Washington
To: stable@dpdk.org, Junfeng Guo, Jeroen de Borst, Rushil Gupta,
 Joshua Washington, Xiaoyun Li
Cc: Praveen Kaligineedi
Content-Type: text/plain; charset="UTF-8"

From: Praveen Kaligineedi

Allocating a QPL (queue page list) for an RX queue might fail if
enough contiguous IOVA memory cannot be allocated. This commonly
occurs when using 2MB huge pages: by default, 1024 4K buffers are
allocated for each RX ring, requiring 4MB of IOVA-contiguous memory
per ring.

However, the only requirement for RX QPLs is that each 4K buffer be
IOVA contiguous, not the entire QPL. Therefore, malloc will be used
to allocate RX QPLs instead. Note that TX queues require the entire
QPL to be IOVA contiguous, so they will continue to use the
memzone-based allocation.
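To make the contrast concrete, here is a minimal standalone sketch of
the two strategies. Illustrative only, not part of the diff: the two
helper names are invented, but the DPDK calls are exactly the ones the
patch below relies on.

/* Sketch only -- not part of this patch. PAGE_SIZE is redefined here
 * for self-containment; the gve driver provides its own definition.
 */
#include <errno.h>
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_malloc.h>
#include <rte_memzone.h>

#define PAGE_SIZE 4096 /* 4K QPL page size, as in the gve driver */

/* TX-style QPL: one memzone, so the whole list shares a single IOVA
 * range. With 2MB huge pages, a 4MB reservation needs two physically
 * adjacent huge pages, which is the allocation that can fail.
 */
static const struct rte_memzone *
tx_style_qpl_alloc(const char *name, uint32_t num_pages)
{
	return rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
					   rte_socket_id(),
					   RTE_MEMZONE_IOVA_CONTIG,
					   PAGE_SIZE);
}

/* RX-style QPL: each page is a separate page-aligned allocation, so
 * only each 4K page must be IOVA contiguous with itself; the IOVA of
 * every buffer is then looked up individually.
 */
static int
rx_style_qpl_alloc(void **bufs, rte_iova_t *iovas, uint32_t num_pages)
{
	uint32_t i;

	for (i = 0; i < num_pages; i++) {
		bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE,
					    rte_socket_id());
		if (bufs[i] == NULL) {
			/* Unwind the pages allocated so far. */
			while (i > 0)
				rte_free(bufs[--i]);
			return -ENOMEM;
		}
		iovas[i] = rte_malloc_virt2iova(bufs[i]);
	}
	return 0;
}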
Fixes: a46583cf43c8 ("net/gve: support Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/gve/gve_ethdev.c | 139 +++++++++++++++++++++++++++++------
 drivers/net/gve/gve_ethdev.h |   5 +-
 drivers/net/gve/gve_rx.c     |   2 +-
 3 files changed, 122 insertions(+), 24 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index ecd37ff37f..d020e0be66 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -20,13 +20,45 @@ gve_write_version(uint8_t *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static const struct rte_memzone *
+gve_alloc_using_mz(const char *name, uint32_t num_pages)
+{
+	const struct rte_memzone *mz;
+	mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
+					 rte_socket_id(),
+					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
+	if (mz == NULL)
+		PMD_DRV_LOG(ERR, "Failed to alloc memzone %s.", name);
+	return mz;
+}
+
 static int
-gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages)
+gve_alloc_using_malloc(void **bufs, uint32_t num_entries)
+{
+	uint32_t i;
+
+	for (i = 0; i < num_entries; i++) {
+		bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE, rte_socket_id());
+		if (bufs[i] == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to malloc");
+			goto free_bufs;
+		}
+	}
+	return 0;
+
+free_bufs:
+	while (i > 0)
+		rte_free(bufs[--i]);
+
+	return -ENOMEM;
+}
+
+static int
+gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages,
+	bool is_rx)
 {
-	char z_name[RTE_MEMZONE_NAMESIZE];
 	struct gve_queue_page_list *qpl;
-	const struct rte_memzone *mz;
-	dma_addr_t page_bus;
+	int err = 0;
 	uint32_t i;
 
 	if (priv->num_registered_pages + pages >
@@ -37,31 +69,79 @@ gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages)
 		return -EINVAL;
 	}
 	qpl = &priv->qpl[id];
-	snprintf(z_name, sizeof(z_name), "gve_%s_qpl%d", priv->pci_dev->device.name, id);
-	mz = rte_memzone_reserve_aligned(z_name, pages * PAGE_SIZE,
-					 rte_socket_id(),
-					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
-	if (mz == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to alloc %s.", z_name);
-		return -ENOMEM;
-	}
+
 	qpl->page_buses = rte_zmalloc("qpl page buses", pages * sizeof(dma_addr_t), 0);
 	if (qpl->page_buses == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to alloc qpl %u page buses", id);
 		return -ENOMEM;
 	}
-	page_bus = mz->iova;
-	for (i = 0; i < pages; i++) {
-		qpl->page_buses[i] = page_bus;
-		page_bus += PAGE_SIZE;
+
+	if (is_rx) {
+		/* RX QPL need not be IOVA contiguous.
+		 * Allocate 4K size buffers using malloc
+		 */
+		qpl->qpl_bufs = rte_zmalloc("qpl bufs",
+			pages * sizeof(void *), 0);
+		if (qpl->qpl_bufs == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc qpl bufs");
+			err = -ENOMEM;
+			goto free_qpl_page_buses;
+		}
+
+		err = gve_alloc_using_malloc(qpl->qpl_bufs, pages);
+		if (err)
+			goto free_qpl_page_bufs;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < pages; i++)
+			qpl->page_buses[i] =
+				rte_malloc_virt2iova(qpl->qpl_bufs[i]);
+	} else {
+		char z_name[RTE_MEMZONE_NAMESIZE];
+
+		snprintf(z_name, sizeof(z_name), "gve_%s_qpl%d", priv->pci_dev->device.name, id);
+
+		/* TX QPL needs to be IOVA contiguous
+		 * Allocate QPL using memzone
+		 */
+		qpl->mz = gve_alloc_using_mz(z_name, pages);
+		if (!qpl->mz) {
+			err = -ENOMEM;
+			goto free_qpl_page_buses;
+		}
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < pages; i++)
+			qpl->page_buses[i] = qpl->mz->iova + i * PAGE_SIZE;
 	}
+
 	qpl->id = id;
-	qpl->mz = mz;
 	qpl->num_entries = pages;
 
 	priv->num_registered_pages += pages;
 
 	return 0;
+
+free_qpl_page_bufs:
+	rte_free(qpl->qpl_bufs);
+free_qpl_page_buses:
+	rte_free(qpl->page_buses);
+	return err;
+}
+
+/*
+ * Free QPL bufs in RX QPLs. Should not be used on TX QPLs.
+ **/
+static void
+gve_free_qpl_bufs(struct gve_queue_page_list *qpl)
+{
+	uint32_t i;
+
+	for (i = 0; i < qpl->num_entries; i++)
+		rte_free(qpl->qpl_bufs[i]);
+
+	rte_free(qpl->qpl_bufs);
+	qpl->qpl_bufs = NULL;
 }
 
 static void
@@ -74,9 +154,19 @@ gve_free_qpls(struct gve_priv *priv)
 	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
 		return;
 
-	for (i = 0; i < nb_txqs + nb_rxqs; i++) {
-		if (priv->qpl[i].mz != NULL)
+	/* Free TX QPLs. */
+	for (i = 0; i < nb_txqs; i++) {
+		if (priv->qpl[i].mz) {
 			rte_memzone_free(priv->qpl[i].mz);
+			priv->qpl[i].mz = NULL;
+		}
+		rte_free(priv->qpl[i].page_buses);
+	}
+
+	/* Free RX QPLs. */
+	for (; i < nb_rxqs; i++) {
+		if (priv->qpl[i].qpl_bufs)
+			gve_free_qpl_bufs(&priv->qpl[i]);
 		rte_free(priv->qpl[i].page_buses);
 	}
 
@@ -755,11 +845,16 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 	}
 
 	for (i = 0; i < priv->max_nb_txq + priv->max_nb_rxq; i++) {
-		if (i < priv->max_nb_txq)
+		bool is_rx;
+
+		if (i < priv->max_nb_txq) {
 			pages = priv->tx_pages_per_qpl;
-		else
+			is_rx = false;
+		} else {
 			pages = priv->rx_data_slot_cnt;
-		err = gve_alloc_queue_page_list(priv, i, pages);
+			is_rx = true;
+		}
+		err = gve_alloc_queue_page_list(priv, i, pages, is_rx);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to alloc qpl %u.", i);
 			goto err_qpl;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 58d8943e71..59febc153e 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -40,7 +40,10 @@ struct gve_queue_page_list {
 	uint32_t id; /* unique id */
 	uint32_t num_entries;
 	dma_addr_t *page_buses; /* the dma addrs of the pages */
-	const struct rte_memzone *mz;
+	union {
+		const struct rte_memzone *mz; /* memzone allocated for TX queue */
+		void **qpl_bufs; /* RX qpl-buffer list allocated using malloc*/
+	};
 };
 
 /* A TX desc ring entry */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 36a1b73c65..b8ef625b5c 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -117,7 +117,7 @@ gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
 		rxq->ctx.mbuf_tail = rxe;
 	}
 	if (rxq->is_gqi_qpl) {
-		addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+		addr = (uint64_t)rxq->qpl->qpl_bufs[rx_id] + padding;
 		rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
 			   (void *)(size_t)addr, len);
 	}
-- 
2.48.1.601.g30ceb7b040-goog
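P.S. A note on the gve_rx.c hunk: with the memzone layout, the copy
source for slot rx_id was a fixed offset into one contiguous mapping;
with per-page buffers it becomes an indexed pointer lookup. A minimal
sketch of the before/after address computation (illustrative only;
helper names invented, PAGE_SIZE assumed 4K as in the driver):

#include <stdint.h>

#define PAGE_SIZE 4096 /* 4K QPL page size, as in the gve driver */

/* Before: page rx_id lives at a fixed offset in one memzone mapping. */
static inline uint64_t
qpl_src_addr_memzone(void *mz_addr, uint16_t rx_id, uint64_t padding)
{
	return (uint64_t)mz_addr + (uint64_t)rx_id * PAGE_SIZE + padding;
}

/* After: each page is its own malloc'd buffer; index the buffer list. */
static inline uint64_t
qpl_src_addr_malloc(void **qpl_bufs, uint16_t rx_id, uint64_t padding)
{
	return (uint64_t)qpl_bufs[rx_id] + padding;
}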