From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Praveen Kaligineedi
CC: Xueming Li, Joshua Washington, dpdk stable
Subject: patch 'net/gve: allocate Rx QPL pages using malloc' has been queued to stable release 23.11.4
Date: Tue, 8 Apr 2025 16:16:25 +0800
Message-ID: <20250408081625.377877-4-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250408081625.377877-1-xuemingl@nvidia.com>
References: <20250218123523.36836-1-xuemingl@nvidia.com> <20250408081625.377877-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 23.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/10/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=aca75f1cb950a151b76403203d0c50832f01c566

Thanks.

Xueming Li

---
>From aca75f1cb950a151b76403203d0c50832f01c566 Mon Sep 17 00:00:00 2001
From: Praveen Kaligineedi
Date: Mon, 3 Mar 2025 15:06:08 -0800
Subject: [PATCH] net/gve: allocate Rx QPL pages using malloc
Cc: Xueming Li

Allocating a QPL for an RX queue might fail if enough contiguous IOVA
memory cannot be allocated. This can commonly occur when using 2MB huge
pages, because 1024 4K buffers are allocated for each RX ring by default,
resulting in 4MB per ring. However, the only requirement for RX QPLs is
that each 4K buffer be IOVA contiguous, not the entire QPL. Therefore,
malloc will be used to allocate RX QPLs instead.

Note that TX queues require the entire QPL to be IOVA contiguous, so
they will continue to use the memzone-based allocation.
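As an illustration of the difference (a minimal standalone sketch, not
code from the patch): RX pages can come from individual
rte_malloc_socket() calls, with rte_malloc_virt2iova() supplying each
page's bus address, while TX still needs a single
RTE_MEMZONE_IOVA_CONTIG reservation. The helper names and the
QPL_PAGE_SIZE constant below are illustrative only, and error handling
is trimmed:

#include <errno.h>
#include <stdint.h>
#include <rte_lcore.h>
#include <rte_malloc.h>
#include <rte_memory.h>
#include <rte_memzone.h>

#define QPL_PAGE_SIZE 4096 /* assumption: 4K QPL pages, as in the driver */

/* RX: each 4K page only needs to be IOVA-contiguous with itself,
 * so every page can come from a separate rte_malloc allocation.
 */
static int
alloc_rx_qpl_pages(void **bufs, rte_iova_t *iovas, uint32_t num_pages)
{
	uint32_t i;

	for (i = 0; i < num_pages; i++) {
		bufs[i] = rte_malloc_socket(NULL, QPL_PAGE_SIZE,
					    QPL_PAGE_SIZE, rte_socket_id());
		if (bufs[i] == NULL) {
			while (i > 0) /* unwind the pages allocated so far */
				rte_free(bufs[--i]);
			return -ENOMEM;
		}
		/* Query this page's IOVA for the device page list. */
		iovas[i] = rte_malloc_virt2iova(bufs[i]);
	}
	return 0;
}

/* TX: the whole QPL must be one IOVA-contiguous region, which only a
 * memzone reserved with RTE_MEMZONE_IOVA_CONTIG can guarantee.
 */
static const struct rte_memzone *
alloc_tx_qpl(const char *name, uint32_t num_pages)
{
	return rte_memzone_reserve_aligned(name,
			(size_t)num_pages * QPL_PAGE_SIZE, rte_socket_id(),
			RTE_MEMZONE_IOVA_CONTIG, QPL_PAGE_SIZE);
}

With 2MB hugepages, the memzone path needs num_pages * 4K of
IOVA-contiguous backing (4MB for the default 1024-page RX ring, i.e.
two adjacent hugepages), while the per-page path only ever asks for 4K
at a time.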
Fixes: a46583cf43c8 ("net/gve: support Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/gve/gve_ethdev.c | 139 +++++++++++++++++++++++++++++------
 drivers/net/gve/gve_ethdev.h |   5 +-
 drivers/net/gve/gve_rx.c     |   2 +-
 3 files changed, 122 insertions(+), 24 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index bd683a64d7..2dc47c1226 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -20,13 +20,45 @@ gve_write_version(uint8_t *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static const struct rte_memzone *
+gve_alloc_using_mz(const char *name, uint32_t num_pages)
+{
+	const struct rte_memzone *mz;
+	mz = rte_memzone_reserve_aligned(name, num_pages * PAGE_SIZE,
+					 rte_socket_id(),
+					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
+	if (mz == NULL)
+		PMD_DRV_LOG(ERR, "Failed to alloc memzone %s.", name);
+	return mz;
+}
+
 static int
-gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages)
+gve_alloc_using_malloc(void **bufs, uint32_t num_entries)
+{
+	uint32_t i;
+
+	for (i = 0; i < num_entries; i++) {
+		bufs[i] = rte_malloc_socket(NULL, PAGE_SIZE, PAGE_SIZE, rte_socket_id());
+		if (bufs[i] == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to malloc");
+			goto free_bufs;
+		}
+	}
+	return 0;
+
+free_bufs:
+	while (i > 0)
+		rte_free(bufs[--i]);
+
+	return -ENOMEM;
+}
+
+static int
+gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages,
+			  bool is_rx)
 {
-	char z_name[RTE_MEMZONE_NAMESIZE];
 	struct gve_queue_page_list *qpl;
-	const struct rte_memzone *mz;
-	dma_addr_t page_bus;
+	int err = 0;
 	uint32_t i;
 
 	if (priv->num_registered_pages + pages >
@@ -37,31 +69,79 @@ gve_alloc_queue_page_list(struct gve_priv *priv, uint32_t id, uint32_t pages)
 		return -EINVAL;
 	}
 	qpl = &priv->qpl[id];
-	snprintf(z_name, sizeof(z_name), "gve_%s_qpl%d", priv->pci_dev->device.name, id);
-	mz = rte_memzone_reserve_aligned(z_name, pages * PAGE_SIZE,
-					 rte_socket_id(),
-					 RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
-	if (mz == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to alloc %s.", z_name);
-		return -ENOMEM;
-	}
+
 	qpl->page_buses = rte_zmalloc("qpl page buses", pages * sizeof(dma_addr_t), 0);
 	if (qpl->page_buses == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to alloc qpl %u page buses", id);
 		return -ENOMEM;
 	}
-	page_bus = mz->iova;
-	for (i = 0; i < pages; i++) {
-		qpl->page_buses[i] = page_bus;
-		page_bus += PAGE_SIZE;
+
+	if (is_rx) {
+		/* RX QPL need not be IOVA contiguous.
+		 * Allocate 4K size buffers using malloc
+		 */
+		qpl->qpl_bufs = rte_zmalloc("qpl bufs",
+			pages * sizeof(void *), 0);
+		if (qpl->qpl_bufs == NULL) {
+			PMD_DRV_LOG(ERR, "Failed to alloc qpl bufs");
+			err = -ENOMEM;
+			goto free_qpl_page_buses;
+		}
+
+		err = gve_alloc_using_malloc(qpl->qpl_bufs, pages);
+		if (err)
+			goto free_qpl_page_bufs;
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < pages; i++)
+			qpl->page_buses[i] =
+				rte_malloc_virt2iova(qpl->qpl_bufs[i]);
+	} else {
+		char z_name[RTE_MEMZONE_NAMESIZE];
+
+		snprintf(z_name, sizeof(z_name), "gve_%s_qpl%d", priv->pci_dev->device.name, id);
+
+		/* TX QPL needs to be IOVA contiguous
+		 * Allocate QPL using memzone
+		 */
+		qpl->mz = gve_alloc_using_mz(z_name, pages);
+		if (!qpl->mz) {
+			err = -ENOMEM;
+			goto free_qpl_page_buses;
+		}
+
+		/* Populate the IOVA addresses */
+		for (i = 0; i < pages; i++)
+			qpl->page_buses[i] = qpl->mz->iova + i * PAGE_SIZE;
 	}
+
 	qpl->id = id;
-	qpl->mz = mz;
 	qpl->num_entries = pages;
 
 	priv->num_registered_pages += pages;
 
 	return 0;
+
+free_qpl_page_bufs:
+	rte_free(qpl->qpl_bufs);
+free_qpl_page_buses:
+	rte_free(qpl->page_buses);
+	return err;
+}
+
+/*
+ * Free QPL bufs in RX QPLs. Should not be used on TX QPLs.
+ **/
+static void
+gve_free_qpl_bufs(struct gve_queue_page_list *qpl)
+{
+	uint32_t i;
+
+	for (i = 0; i < qpl->num_entries; i++)
+		rte_free(qpl->qpl_bufs[i]);
+
+	rte_free(qpl->qpl_bufs);
+	qpl->qpl_bufs = NULL;
 }
 
 static void
@@ -74,9 +154,19 @@ gve_free_qpls(struct gve_priv *priv)
 	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
 		return;
 
-	for (i = 0; i < nb_txqs + nb_rxqs; i++) {
-		if (priv->qpl[i].mz != NULL)
+	/* Free TX QPLs. */
+	for (i = 0; i < nb_txqs; i++) {
+		if (priv->qpl[i].mz) {
 			rte_memzone_free(priv->qpl[i].mz);
+			priv->qpl[i].mz = NULL;
+		}
+		rte_free(priv->qpl[i].page_buses);
+	}
+
+	/* Free RX QPLs.
+	 */
+	for (; i < nb_rxqs; i++) {
+		if (priv->qpl[i].qpl_bufs)
+			gve_free_qpl_bufs(&priv->qpl[i]);
 		rte_free(priv->qpl[i].page_buses);
 	}
 
@@ -772,11 +862,16 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 	}
 
 	for (i = 0; i < priv->max_nb_txq + priv->max_nb_rxq; i++) {
-		if (i < priv->max_nb_txq)
+		bool is_rx;
+
+		if (i < priv->max_nb_txq) {
 			pages = priv->tx_pages_per_qpl;
-		else
+			is_rx = false;
+		} else {
 			pages = priv->rx_data_slot_cnt;
-		err = gve_alloc_queue_page_list(priv, i, pages);
+			is_rx = true;
+		}
+		err = gve_alloc_queue_page_list(priv, i, pages, is_rx);
 		if (err != 0) {
 			PMD_DRV_LOG(ERR, "Failed to alloc qpl %u.", i);
 			goto err_qpl;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 133860488c..e145d6b639 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -42,7 +42,10 @@ struct gve_queue_page_list {
 	uint32_t id; /* unique id */
 	uint32_t num_entries;
 	dma_addr_t *page_buses; /* the dma addrs of the pages */
-	const struct rte_memzone *mz;
+	union {
+		const struct rte_memzone *mz; /* memzone allocated for TX queue */
+		void **qpl_bufs; /* RX qpl-buffer list allocated using malloc*/
+	};
 };
 
 /* A TX desc ring entry */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 36a1b73c65..b8ef625b5c 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -117,7 +117,7 @@ gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
 		rxq->ctx.mbuf_tail = rxe;
 	}
 	if (rxq->is_gqi_qpl) {
-		addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+		addr = (uint64_t)rxq->qpl->qpl_bufs[rx_id] + padding;
 		rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
 			(void *)(size_t)addr, len);
 	}
-- 
2.34.1
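A note on the gve_ethdev.h hunk above: mz and qpl_bufs share storage in
a union, so exactly one of them is meaningful for a given queue. Below
is a minimal sketch of the access pattern this implies; the explicit
is_rx flag and the qpl_like/qpl_page_addr names are hypothetical (the
real driver knows the queue type from context, e.g. rxq->is_gqi_qpl):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <rte_memzone.h>

#define QPL_PAGE_SIZE 4096 /* assumption: 4K QPL pages */

struct qpl_like { /* simplified stand-in for struct gve_queue_page_list */
	bool is_rx; /* hypothetical explicit discriminant */
	union {
		const struct rte_memzone *mz; /* valid for TX QPLs */
		void **qpl_bufs;              /* valid for RX QPLs */
	};
};

/* Resolve the virtual address of 4K page `idx` for either queue type. */
static void *
qpl_page_addr(const struct qpl_like *qpl, uint32_t idx)
{
	if (qpl->is_rx)
		return qpl->qpl_bufs[idx]; /* separately malloc'd page */
	return (uint8_t *)qpl->mz->addr + (size_t)idx * QPL_PAGE_SIZE;
}

This is also why the gve_rx.c hunk switches from mz->addr + rx_id *
PAGE_SIZE to qpl_bufs[rx_id]: once RX pages are allocated individually,
a page's address can no longer be computed from a single base pointer.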