From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <333a22c5-3a05-45a8-b12e-61f553e8c490@amd.com>
Date: Tue, 27 Feb 2024 15:27:45 +0530
Subject: Re: [PATCH v2] app/dma-perf: replace pktmbuf with mempool objects
From: "Varghese, Vipin"
To: fengchengwen, dev@dpdk.org, david.marchand@redhat.com, honest.jiang@foxmail.com
Cc: Morten Brørup, Thiyagrajan P, Ferruh Yigit
In-Reply-To: <591e44d2-8a6a-b762-0ca5-c8ed9777b577@huawei.com>
References: <20231212103746.1910-1-vipin.varghese@amd.com>
 <20231220110333.619-1-vipin.varghese@amd.com>
 <591e44d2-8a6a-b762-0ca5-c8ed9777b577@huawei.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

On 2/26/2024 7:35 AM, fengchengwen wrote:
>
> Hi Vipin,
>
> On 2023/12/20 19:03, Vipin Varghese wrote:
>> From: Vipin Varghese
>>
>> Replace the pktmbuf pool with a plain mempool; this increases MOPS,
>> especially at smaller buffer sizes, because using a mempool avoids
>> the extra CPU cycles spent on mbuf handling.
>>
>> Changes made are:
>> 1. Create the pool with mempool create instead of pktmbuf pool create.
>> 2. Create the src & dst pointer arrays on the appropriate NUMA node.
>> 3. Use mempool get and put for the buffer objects.
>> 4. Remove pktmbuf_mtod for the DMA and CPU memcpy paths.
>>
>> v2 changes:
>>  - add ACK from Morten Brørup
>>
>> v1 changes:
>>  - pktmbuf pool create with mempool create.
>>  - create src & dst pointer arrays on the appropriate NUMA node.
>>  - use get pool and put for mempool objects.
>>  - remove pktmbuf_mtod for dma and cpu memcpy.
>>
>> Test Results for pktmbuf vs mempool:
>> ====================================
>>
>> Format: Buffer Size | % AVG cycles | % AVG Gbps
>>
>> Category-1: HW-DSA
>> -------------------
>>   64 | -13.11 | 14.97
>>  128 | -41.49 |  0.41
>>  256 |  -1.85 |  1.20
>>  512 |  -9.38 |  8.81
>> 1024 |   1.82 | -2.00
>> 1518 |   0.00 | -0.80
>> 2048 |   1.03 | -0.91
>> 4096 |   0.00 | -0.35
>> 8192 |   0.07 | -0.08
>>
>> Category-2: MEMCPY
>> -------------------
>>   64 | -12.50 | 14.14
>>  128 | -40.63 | 67.26
>>  256 | -38.78 | 59.35
>>  512 | -30.26 | 43.36
>> 1024 | -21.80 | 27.04
>> 1518 | -16.23 | 19.33
>> 2048 | -14.75 | 16.81
>> 4096 |  -9.56 | 10.01
>> 8192 |  -3.32 |  3.12
>>
>> Signed-off-by: Vipin Varghese
>> Acked-by: Morten Brørup
>> Tested-by: Thiyagrajan P
>> ---
>> ---
>>  app/test-dma-perf/benchmark.c | 74 +++++++++++++++++++++--------------
>>  1 file changed, 44 insertions(+), 30 deletions(-)
>>
>> diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
>> index 9b1f58c78c..dc6f16cc01 100644
>> --- a/app/test-dma-perf/benchmark.c
>> +++ b/app/test-dma-perf/benchmark.c
>> @@ -43,8 +43,8 @@ struct lcore_params {
>>  	uint16_t kick_batch;
>>  	uint32_t buf_size;
>>  	uint16_t test_secs;
>> -	struct rte_mbuf **srcs;
>> -	struct rte_mbuf **dsts;
>> +	void **srcs;
>> +	void **dsts;
>>  	volatile struct worker_info worker_info;
>>  };
>>
>> @@ -110,17 +110,17 @@ output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t r
>>  }
>>
>>  static inline void
>> -cache_flush_buf(__rte_unused struct rte_mbuf **array,
>> +cache_flush_buf(__rte_unused void **array,
>>  		__rte_unused uint32_t buf_size,
>>  		__rte_unused uint32_t nr_buf)
>>  {
>>  #ifdef RTE_ARCH_X86_64
>>  	char *data;
>> -	struct rte_mbuf **srcs = array;
>> +	void **srcs = array;
>>  	uint32_t i, offset;
>>
>>  	for (i = 0; i < nr_buf; i++) {
>> -		data = rte_pktmbuf_mtod(srcs[i], char *);
>> +		data = (char *) srcs[i];
>>  		for (offset = 0; offset < buf_size; offset += 64)
>>  			__builtin_ia32_clflush(data + offset);
>>  	}
>> @@ -224,8 +224,8 @@ do_dma_mem_copy(void *p)
>>  	const uint32_t nr_buf = para->nr_buf;
>>  	const uint16_t kick_batch = para->kick_batch;
>>  	const uint32_t buf_size = para->buf_size;
>> -	struct rte_mbuf **srcs = para->srcs;
>> -	struct rte_mbuf **dsts = para->dsts;
>> +	void **srcs = para->srcs;
>> +	void **dsts = para->dsts;
>>  	uint16_t nr_cpl;
>>  	uint64_t async_cnt = 0;
>>  	uint32_t i;
>> @@ -241,8 +241,12 @@ do_dma_mem_copy(void *p)
>>  	while (1) {
>>  		for (i = 0; i < nr_buf; i++) {
>>  dma_copy:
>> -			ret = rte_dma_copy(dev_id, 0, rte_mbuf_data_iova(srcs[i]),
>> -					rte_mbuf_data_iova(dsts[i]), buf_size, 0);
>> +			ret = rte_dma_copy(dev_id,
>> +					0,
>> +					(rte_iova_t) srcs[i],
>> +					(rte_iova_t) dsts[i],

Thank you ChengWen for the suggestion; please find my observations below.

> should consider IOVA != VA, so here should be with rte_mempool_virt2iova(),
> but this commit is mainly to eliminate the address convert overload, so we
> should prepare IOVA for DMA copy, and VA for memory copy.

Yes, Ferruh helped me to understand this. Please let me look into it and
share a v3 soon.

> I prefer keep pkt_mbuf, but new add two field, and create this two field
> when setup_memory_env(), then direct use them in do_xxx_mem_copy.

Please help me understand your suggestion: in setup_memory_env() we still
create the pktmbuf pool, but when the arrays are populated we call
pktmbuf_mtod() up front and store the starting addresses instead of the
mbuf pointers, so that neither the cpu-copy nor the dma-copy path spends
cycles on the conversion. Is that what you mean?

My reasoning for not using pktmbuf is as follows:

1. A pkt_mbuf object carries the rte_mbuf metadata plus the private area,
   headroom and tailroom.
2. So when we create payloads of 2K, 4K, 8K, 16K, 32K or 1GB we also pay
   for that extra headroom, which is not efficient.
3. dma-perf targets raw performance measurement, not a network function.
4. There is already an existing example that exercises pktmbuf with dma
   calls; hence I would like to use mempool, which also allows per-NUMA
   creation with flags.

> Thanks.
>> +					buf_size,
>> +					0);
>>  			if (unlikely(ret < 0)) {
>>  				if (ret == -ENOSPC) {
>>  					do_dma_submit_and_poll(dev_id, &async_cnt, worker_info);
>> @@ -276,8 +280,8 @@ do_cpu_mem_copy(void *p)
>>  	volatile struct worker_info *worker_info = &(para->worker_info);
>>  	const uint32_t nr_buf = para->nr_buf;
>>  	const uint32_t buf_size = para->buf_size;
>> -	struct rte_mbuf **srcs = para->srcs;
>> -	struct rte_mbuf **dsts = para->dsts;
>> +	void **srcs = para->srcs;
>> +	void **dsts = para->dsts;
>>  	uint32_t i;
>>
>>  	worker_info->stop_flag = false;
>> @@ -288,8 +292,8 @@
>>
>>  	while (1) {
>>  		for (i = 0; i < nr_buf; i++) {
>> -			const void *src = rte_pktmbuf_mtod(dsts[i], void *);
>> -			void *dst = rte_pktmbuf_mtod(srcs[i], void *);
>> +			const void *src = (void *) dsts[i];
>> +			void *dst = (void *) srcs[i];
>>
>>  			/* copy buffer form src to dst */
>>  			rte_memcpy(dst, src, (size_t)buf_size);
>> @@ -303,8 +307,8 @@
>>  }
>>
>>  static int
>> -setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs,
>> -		struct rte_mbuf ***dsts)
>> +setup_memory_env(struct test_configure *cfg, void ***srcs,
>> +		void ***dsts)
>>  {
>>  	unsigned int buf_size = cfg->buf_size.cur;
>>  	unsigned int nr_sockets;
>> @@ -317,47 +321,57 @@ setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs,
>>  		return -1;
>>  	}
>>
>> -	src_pool = rte_pktmbuf_pool_create("Benchmark_DMA_SRC",
>> +	src_pool = rte_mempool_create("Benchmark_DMA_SRC",
>>  			nr_buf,
>> +			buf_size,
>>  			0,
>>  			0,
>> -			buf_size + RTE_PKTMBUF_HEADROOM,
>> -			cfg->src_numa_node);
>> +			NULL,
>> +			NULL,
>> +			NULL,
>> +			NULL,
>> +			cfg->src_numa_node,
>> +			RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
>>  	if (src_pool == NULL) {
>>  		PRINT_ERR("Error with source mempool creation.\n");
>>  		return -1;
>>  	}
>>
>> -	dst_pool = rte_pktmbuf_pool_create("Benchmark_DMA_DST",
>> +	dst_pool = rte_mempool_create("Benchmark_DMA_DST",
>>  			nr_buf,
>> +			buf_size,
>>  			0,
>>  			0,
>> -			buf_size + RTE_PKTMBUF_HEADROOM,
>> -			cfg->dst_numa_node);
>> +			NULL,
>> +			NULL,
>> +			NULL,
>> +			NULL,
>> +			cfg->dst_numa_node,
>> +			RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
>>  	if (dst_pool == NULL) {
>>  		PRINT_ERR("Error with destination mempool creation.\n");
>>  		return -1;
>>  	}
>>
>> -	*srcs = rte_malloc(NULL, nr_buf * sizeof(struct rte_mbuf *), 0);
>> +	*srcs = rte_malloc_socket(NULL, nr_buf * sizeof(unsigned char *), 0, cfg->src_numa_node);
>>  	if (*srcs == NULL) {
>>  		printf("Error: srcs malloc failed.\n");
>>  		return -1;
>>  	}
>>
>> -	*dsts = rte_malloc(NULL, nr_buf * sizeof(struct rte_mbuf *), 0);
>> +	*dsts = rte_malloc_socket(NULL, nr_buf * sizeof(unsigned char *), 0, cfg->dst_numa_node);
>>  	if (*dsts == NULL) {
>>  		printf("Error: dsts malloc failed.\n");
>>  		return -1;
>>  	}
>>
>> -	if (rte_pktmbuf_alloc_bulk(src_pool, *srcs, nr_buf) != 0) {
>> -		printf("alloc src mbufs failed.\n");
>> +	if (rte_mempool_get_bulk(src_pool, *srcs, nr_buf) != 0) {
>> +		printf("alloc src bufs failed.\n");
>>  		return -1;
>>  	}
>>
>> -	if (rte_pktmbuf_alloc_bulk(dst_pool, *dsts, nr_buf) != 0) {
>> -		printf("alloc dst mbufs failed.\n");
>> +	if (rte_mempool_get_bulk(dst_pool, *dsts, nr_buf) != 0) {
>> +		printf("alloc dst bufs failed.\n");
>>  		return -1;
>>  	}
>>
>> @@ -370,7 +384,7 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
>>  	uint16_t i;
>>  	uint32_t offset;
>>  	unsigned int lcore_id = 0;
>> -	struct rte_mbuf **srcs = NULL, **dsts = NULL;
>> +	void **srcs = NULL, **dsts = NULL;
>>  	struct lcore_dma_map_t *ldm = &cfg->lcore_dma_map;
>>  	unsigned int buf_size = cfg->buf_size.cur;
>>  	uint16_t kick_batch = cfg->kick_batch.cur;
>> @@ -478,9 +492,9 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
>>  out:
>>  	/* free mbufs used in the test */
>>  	if (srcs != NULL)
>> -		rte_pktmbuf_free_bulk(srcs, nr_buf);
>> +		rte_mempool_put_bulk(src_pool, srcs, nr_buf);
>>  	if (dsts != NULL)
>> -		rte_pktmbuf_free_bulk(dsts, nr_buf);
>> +		rte_mempool_put_bulk(dst_pool, dsts, nr_buf);
>>
>>  	/* free the points for the mbufs */
>>  	rte_free(srcs);
>>