From: Elena Agostini <eagostini@nvidia.com>
To: <dev@dpdk.org>
Cc: <eagostini@nvidia.com>
Subject: [PATCH v2 1/1] app/testpmd: add GPU memory option in iofwd engine
Date: Thu, 11 Nov 2021 21:41:41 +0000
Message-ID: <20211111214141.26612-2-eagostini@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211111214141.26612-1-eagostini@nvidia.com>
References: <20211029204909.21318-1-eagostini@nvidia.com>
 <20211111214141.26612-1-eagostini@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

This patch introduces GPU memory support in testpmd through the gpudev
library. Testpmd can be used for network benchmarks when using GPU
memory instead of regular CPU memory to send and receive packets.
This option is currently limited to the iofwd engine.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>

Depends-on: series-19465 ("GPU library")
Depends-on: series-20422 ("common/mlx5: fix external memory pool registration")
---
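Example invocation exercising the new option (illustrative only; the
NIC PCI address, core list and the rest of the EAL setup depend on the
local system):

    ./build/app/dpdk-testpmd -l 0-3 -a <NIC_PCI_ADDR> -- \
        --forward-mode=io --mbuf-size=2048g

With the 'g' suffix, the mbuf data buffers of the pool are allocated
on the first gpudev device instead of CPU memory.
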
 app/test-pmd/cmdline.c                |  14 +++
 app/test-pmd/meson.build              |   2 +-
 app/test-pmd/parameters.c             |  13 ++-
 app/test-pmd/testpmd.c                | 133 +++++++++++++++++++++++---
 app/test-pmd/testpmd.h                |  16 +++-
 doc/guides/testpmd_app_ug/run_app.rst |   3 +
 6 files changed, 164 insertions(+), 17 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4f51b259fe..36193bc566 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -3614,6 +3614,7 @@ parse_item_list(const char *str, const char *item_name, unsigned int max_items,
 	unsigned int j;
 	int value_ok;
 	char c;
+	int gpu_mbuf = 0;
 
 	/*
 	 * First parse all items in the list and store their value.
@@ -3628,6 +3629,14 @@ parse_item_list(const char *str, const char *item_name, unsigned int max_items,
 			value_ok = 1;
 			continue;
 		}
+		if (c == 'g') {
+			/*
+			 * When this flag is set, mbufs for this segment
+			 * will be created on GPU memory.
+			 */
+			gpu_mbuf = 1;
+			continue;
+		}
 		if (c != ',') {
 			fprintf(stderr, "character %c is not a decimal digit\n", c);
 			return 0;
@@ -3640,6 +3649,8 @@ parse_item_list(const char *str, const char *item_name, unsigned int max_items,
 			parsed_items[nb_item] = value;
 			value_ok = 0;
 			value = 0;
+			mbuf_mem_types[nb_item] = gpu_mbuf ? MBUF_MEM_GPU : MBUF_MEM_CPU;
+			gpu_mbuf = 0;
 		}
 		nb_item++;
 	}
@@ -3648,6 +3659,9 @@ parse_item_list(const char *str, const char *item_name, unsigned int max_items,
 			item_name, nb_item + 1, max_items);
 		return 0;
 	}
+
+	mbuf_mem_types[nb_item] = gpu_mbuf ? MBUF_MEM_GPU : MBUF_MEM_CPU;
+
 	parsed_items[nb_item++] = value;
 	if (! check_unique_values)
 		return nb_item;
diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index d5df52c470..5c8ca68c9d 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -32,7 +32,7 @@ if dpdk_conf.has('RTE_HAS_JANSSON')
     ext_deps += jansson_dep
 endif
 
-deps += ['ethdev', 'gro', 'gso', 'cmdline', 'metrics', 'bus_pci']
+deps += ['ethdev', 'gro', 'gso', 'cmdline', 'metrics', 'bus_pci', 'gpudev']
 if dpdk_conf.has('RTE_CRYPTO_SCHEDULER')
     deps += 'crypto_scheduler'
 endif
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 0974b0a38f..d41f7f220b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -87,7 +87,10 @@ usage(char* progname)
 	       "in NUMA mode.\n");
 	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
 	       "N bytes. If multiple numbers are specified the extra pools "
-	       "will be created to receive with packet split features\n");
+	       "will be created to receive with packet split features\n"
+	       "Use the 'g' suffix for GPU memory.\n"
+	       "If no or an unrecognized suffix is provided, CPU memory is assumed\n");
+
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -595,6 +598,7 @@ launch_args_parse(int argc, char** argv)
 	struct rte_eth_dev_info dev_info;
 	uint16_t rec_nb_pkts;
 	int ret;
+	uint32_t idx = 0;
 
 	static struct option lgopts[] = {
 		{ "help",			0, 0, 0 },
@@ -1538,4 +1542,11 @@ launch_args_parse(int argc, char** argv)
 			 "ignored\n");
 		mempool_flags = 0;
 	}
+
+	for (idx = 0; idx < mbuf_data_size_n; idx++) {
+		if (mbuf_mem_types[idx] == MBUF_MEM_GPU && strcmp(cur_fwd_eng->fwd_mode_name, "io") != 0) {
+			fprintf(stderr, "GPU memory mbufs can be used with the iofwd engine only\n");
+			rte_exit(EXIT_FAILURE, "Command line is incorrect\n");
+		}
+	}
 }
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index a66dfb297c..1af235c4d8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -205,6 +205,12 @@ uint32_t mbuf_data_size_n = 1; /* Number of specified mbuf sizes. */
 uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT] = {
 	DEFAULT_MBUF_DATA_SIZE
 }; /**< Mbuf data space size. */
+
+/* Mbuf memory types. */
+enum mbuf_mem_type mbuf_mem_types[MAX_SEGS_BUFFER_SPLIT];
+/* Pointers to external memory allocated for mempools. */
+uintptr_t mempools_ext_ptr[MAX_SEGS_BUFFER_SPLIT];
+
 uint32_t param_total_num_mbufs = 0;  /**< number of mbufs in all pools - if
                                       * specified on command-line. */
 uint16_t stats_period; /**< Period to show statistics (disabled by default) */
@@ -543,6 +549,12 @@ int proc_id;
  */
 unsigned int num_procs = 1;
 
+/*
+ * When GPU memory is used for mbufs, use for simplicity
+ * the first GPU device in the list.
+ */
+int gpu_id = 0;
+
 static void
 eth_rx_metadata_negotiate_mp(uint16_t port_id)
 {
@@ -1103,6 +1115,79 @@ setup_extbuf(uint32_t nb_mbufs, uint16_t mbuf_sz, unsigned int socket_id,
 	return ext_num;
 }
 
+static struct rte_mempool *
+gpu_mbuf_pool_create(uint16_t mbuf_seg_size, unsigned int nb_mbuf,
+		     unsigned int socket_id, uint16_t port_id,
+		     int gpu_id, uintptr_t *mp_addr)
+{
+	int ret = 0;
+	char pool_name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_eth_dev_info dev_info;
+	struct rte_mempool *rte_mp = NULL;
+	struct rte_pktmbuf_extmem gpu_mem;
+	struct rte_gpu_info ginfo;
+	uint8_t gpu_page_shift = 16;
+	uint32_t gpu_page_size = (1UL << gpu_page_shift);
+
+	ret = eth_dev_info_get_print_err(port_id, &dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE,
+			 "Failed to get device info for port %d\n",
+			 port_id);
+
+	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name), port_id, MBUF_MEM_GPU);
+	if (!is_proc_primary()) {
+		rte_mp = rte_mempool_lookup(pool_name);
+		if (rte_mp == NULL)
+			rte_exit(EXIT_FAILURE,
+				 "Get mbuf pool for socket %u failed: %s\n",
+				 socket_id, rte_strerror(rte_errno));
+		return rte_mp;
+	}
+
+	if (rte_gpu_info_get(gpu_id, &ginfo))
+		rte_exit(EXIT_FAILURE, "Can't retrieve info about GPU %d - bye\n", gpu_id);
+
+	TESTPMD_LOG(INFO,
+		    "create a new mbuf pool <%s>: n=%u, size=%u, socket=%u GPU device=%s\n",
+		    pool_name, nb_mbuf, mbuf_seg_size, socket_id, ginfo.name);
+
+	/* Create an external memory mempool using memory allocated on the GPU. */
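+	/*
+	 * The rte_pktmbuf_extmem descriptor filled below tells
+	 * rte_pktmbuf_pool_create_extbuf() where the external data
+	 * buffers live: the GPU area is allocated with gpudev,
+	 * registered with EAL as external memory and DMA-mapped for
+	 * each probed ethdev before the mempool itself is created.
+	 */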
+
+	gpu_mem.elt_size = RTE_MBUF_DEFAULT_BUF_SIZE;
+	gpu_mem.buf_len = RTE_ALIGN_CEIL(nb_mbuf * gpu_mem.elt_size, gpu_page_size);
+	gpu_mem.buf_iova = RTE_BAD_IOVA;
+
+	gpu_mem.buf_ptr = rte_gpu_mem_alloc(gpu_id, gpu_mem.buf_len);
+	if (gpu_mem.buf_ptr == NULL)
+		rte_exit(EXIT_FAILURE, "Could not allocate GPU device memory\n");
+
+	ret = rte_extmem_register(gpu_mem.buf_ptr, gpu_mem.buf_len, NULL, gpu_mem.buf_iova, gpu_page_size);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "Unable to register addr 0x%p, ret %d\n", gpu_mem.buf_ptr, ret);
+
+	uint16_t pid = 0;
+
+	RTE_ETH_FOREACH_DEV(pid)
+	{
+		ret = rte_dev_dma_map(dev_info.device, gpu_mem.buf_ptr,
+				      gpu_mem.buf_iova, gpu_mem.buf_len);
+		if (ret) {
+			rte_exit(EXIT_FAILURE, "Unable to DMA map addr 0x%p for device %s\n",
+				 gpu_mem.buf_ptr, dev_info.device->name);
+		}
+	}
+
+	rte_mp = rte_pktmbuf_pool_create_extbuf(pool_name, nb_mbuf, mb_mempool_cache, 0, mbuf_seg_size, socket_id, &gpu_mem, 1);
+	if (rte_mp == NULL) {
+		rte_exit(EXIT_FAILURE, "Creation of GPU mempool <%s> failed\n", pool_name);
+	}
+
+	*mp_addr = (uintptr_t)gpu_mem.buf_ptr;
+
+	return rte_mp;
+}
+
 /*
  * Configuration initialisation done once at init time.
  */
@@ -1117,7 +1202,7 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 	mb_size = sizeof(struct rte_mbuf) + mbuf_seg_size;
 #endif
 
-	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name), size_idx);
+	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name), size_idx, MBUF_MEM_CPU);
 	if (!is_proc_primary()) {
 		rte_mp = rte_mempool_lookup(pool_name);
 		if (rte_mp == NULL)
@@ -1700,19 +1785,42 @@ init_config(void)
 		for (i = 0; i < num_sockets; i++)
 			for (j = 0; j < mbuf_data_size_n; j++)
-				mempools[i * MAX_SEGS_BUFFER_SPLIT + j] =
-					mbuf_pool_create(mbuf_data_size[j],
-							  nb_mbuf_per_pool,
-							  socket_ids[i], j);
+			{
+				if (mbuf_mem_types[j] == MBUF_MEM_GPU) {
+					if (rte_gpu_count_avail() == 0)
+						rte_exit(EXIT_FAILURE, "No GPU device available.\n");
+
+					mempools[i * MAX_SEGS_BUFFER_SPLIT + j] =
+						gpu_mbuf_pool_create(mbuf_data_size[j],
+								     nb_mbuf_per_pool,
+								     socket_ids[i], j, gpu_id,
+								     &(mempools_ext_ptr[j]));
+				} else {
+					mempools[i * MAX_SEGS_BUFFER_SPLIT + j] =
+						mbuf_pool_create(mbuf_data_size[j],
+								 nb_mbuf_per_pool,
+								 socket_ids[i], j);
+				}
+			}
 	} else {
 		uint8_t i;
 
 		for (i = 0; i < mbuf_data_size_n; i++)
-			mempools[i] = mbuf_pool_create
-					(mbuf_data_size[i],
-					 nb_mbuf_per_pool,
-					 socket_num == UMA_NO_CONFIG ?
-					 0 : socket_num, i);
+		{
+			if (mbuf_mem_types[i] == MBUF_MEM_GPU) {
+				mempools[i] = gpu_mbuf_pool_create(mbuf_data_size[i],
+								   nb_mbuf_per_pool,
+								   socket_num == UMA_NO_CONFIG ? 0 : socket_num,
+								   i, gpu_id,
+								   &(mempools_ext_ptr[i]));
+			} else {
+				mempools[i] = mbuf_pool_create(mbuf_data_size[i],
+							       nb_mbuf_per_pool,
+							       socket_num == UMA_NO_CONFIG ?
+							       0 : socket_num, i);
+			}
+		}
 	}
 
 	init_port_config();
@@ -3415,8 +3523,11 @@ pmd_test_exit(void)
 		}
 	}
 	for (i = 0 ; i < RTE_DIM(mempools) ; i++) {
-		if (mempools[i])
+		if (mempools[i]) {
 			mempool_free_mp(mempools[i]);
+			if (mbuf_mem_types[i] == MBUF_MEM_GPU)
+				rte_gpu_mem_free(gpu_id, (void *)mempools_ext_ptr[i]);
+		}
 	}
 
 	free(xstats_display);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 669ce1e87d..9919044372 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -12,6 +12,7 @@
 #include <rte_gro.h>
 #include <rte_gso.h>
 #include <rte_os_shim.h>
+#include <rte_gpudev.h>
 #include <cmdline.h>
 #include <sys/queue.h>
 #ifdef RTE_HAS_JANSSON
@@ -474,6 +475,11 @@ extern uint8_t dcb_config;
 extern uint32_t mbuf_data_size_n;
 extern uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT];
 /**< Mbuf data space size. */
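+/* Memory backing for the data buffers of an mbuf pool: plain CPU
+ * memory (the default) or GPU device memory, selected per pool with
+ * the 'g' suffix of --mbuf-size. */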
+enum mbuf_mem_type {
+	MBUF_MEM_CPU,
+	MBUF_MEM_GPU
+};
+extern enum mbuf_mem_type mbuf_mem_types[MAX_SEGS_BUFFER_SPLIT];
 extern uint32_t param_total_num_mbufs;
 extern uint16_t stats_period;
@@ -717,14 +723,16 @@ current_fwd_lcore(void)
 /* Mbuf Pools */
 static inline void
 mbuf_poolname_build(unsigned int sock_id, char *mp_name,
-		    int name_size, uint16_t idx)
+		    int name_size, uint16_t idx, enum mbuf_mem_type mem_type)
 {
+	const char *suffix = mem_type == MBUF_MEM_GPU ? "_gpu" : "";
+
 	if (!idx)
 		snprintf(mp_name, name_size,
-			 MBUF_POOL_NAME_PFX "_%u", sock_id);
+			 MBUF_POOL_NAME_PFX "_%u%s", sock_id, suffix);
 	else
 		snprintf(mp_name, name_size,
-			 MBUF_POOL_NAME_PFX "_%hu_%hu", (uint16_t)sock_id, idx);
+			 MBUF_POOL_NAME_PFX "_%hu_%hu%s", (uint16_t)sock_id, idx, suffix);
 }
 
 static inline struct rte_mempool *
@@ -732,7 +740,7 @@ mbuf_pool_find(unsigned int sock_id, uint16_t idx)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 
-	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name), idx);
+	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name), idx, mbuf_mem_types[idx]);
 	return rte_mempool_lookup((const char *)pool_name);
 }
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 30edef07ea..ede7b79abb 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -119,6 +119,9 @@ The command line options are:
     The default value is 2048. If multiple mbuf-size values are specified the
     extra memory pools will be created for allocating mbufs to receive packets
     with buffer splitting features.
+    Providing an mbuf size with a 'g' suffix (e.g. ``--mbuf-size=2048g``)
+    causes the mempool to be created in a GPU memory area.
+    This option is currently limited to the iofwd engine with the first GPU.
 
 *   ``--total-num-mbufs=N``

-- 
2.17.1
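
For readers who want to reuse this approach outside testpmd, the flow
implemented by gpu_mbuf_pool_create() above boils down to the sketch
below. It is illustrative only (the function name and error handling
are not part of the patch) and assumes the gpudev API from the
"GPU library" series this patch depends on:

  #include <rte_dev.h>
  #include <rte_gpudev.h>
  #include <rte_mbuf.h>
  #include <rte_memory.h>

  static struct rte_mempool *
  gpu_pool_sketch(int16_t gpu_id, struct rte_device *dev,
                  unsigned int nb_mbuf, uint16_t elt_size, int socket_id)
  {
          struct rte_pktmbuf_extmem ext;
          size_t page_sz = 1UL << 16; /* 64KB GPU page, as in the patch */

          ext.elt_size = elt_size;
          ext.buf_len = RTE_ALIGN_CEIL((size_t)nb_mbuf * elt_size, page_sz);
          ext.buf_iova = RTE_BAD_IOVA;

          /* 1. Allocate device memory through gpudev. */
          ext.buf_ptr = rte_gpu_mem_alloc(gpu_id, ext.buf_len);
          if (ext.buf_ptr == NULL)
                  return NULL;

          /* 2. Register the area with EAL as external memory. */
          if (rte_extmem_register(ext.buf_ptr, ext.buf_len, NULL, 0, page_sz))
                  return NULL;

          /* 3. DMA-map it for the NIC. */
          if (rte_dev_dma_map(dev, ext.buf_ptr, ext.buf_iova, ext.buf_len))
                  return NULL;

          /* 4. Create a pktmbuf pool whose data room lives in GPU memory. */
          return rte_pktmbuf_pool_create_extbuf("gpu_sketch_pool", nb_mbuf,
                          0, 0, elt_size, socket_id, &ext, 1);
  }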