From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Varghese, Vipin"
Date: Wed, 28 Feb 2024 08:38:44 +0530
Subject: Re: [PATCH v2] app/dma-perf: replace pktmbuf with mempool objects
To: fengchengwen, dev@dpdk.org, david.marchand@redhat.com, honest.jiang@foxmail.com
Cc: Morten Brørup, Thiyagrajan P, Ferruh Yigit
Message-ID: <481ad603-8acd-420e-84d8-a9dd27f6f5f8@amd.com>
References: <20231212103746.1910-1-vipin.varghese@amd.com>
 <20231220110333.619-1-vipin.varghese@amd.com>
 <591e44d2-8a6a-b762-0ca5-c8ed9777b577@huawei.com>
 <333a22c5-3a05-45a8-b12e-61f553e8c490@amd.com>
 <6905e830-4be4-2b00-fbfb-1fe0d39e16fa@huawei.com>
In-Reply-To: <6905e830-4be4-2b00-fbfb-1fe0d39e16fa@huawei.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
List-Id: DPDK patches and discussions
On 2/27/2024 5:57 PM, fengchengwen wrote:
> Hi Vipin,
>
> On 2024/2/27 17:57, Varghese, Vipin wrote:
>> On 2/26/2024 7:35 AM, fengchengwen wrote:
>>> Hi Vipin,
>>>
>>> On 2023/12/20 19:03, Vipin Varghese wrote:
>>>> From: Vipin Varghese
>>>>
>>>> Replace the pktmbuf pool with a plain mempool. This increases MOPS,
>>>> especially at smaller buffer sizes, because the mempool path avoids
>>>> the extra CPU cycles spent on mbuf handling.
>>>>
>>>> Changes made are
>>>> 1. pktmbuf pool create replaced with mempool create.
>>>> 2. create src & dst pointer arrays on the appropriate NUMA node.
>>>> 3. use mempool get and put for the mempool objects.
>>>> 4. remove pktmbuf_mtod for dma and cpu memcpy.
>>>>
>>>> v2 changes:
>>>>  - add ACK from Morten Brørup
>>>>
>>>> v1 changes:
>>>>  - pktmbuf pool create replaced with mempool create.
>>>>  - create src & dst pointer arrays on the appropriate NUMA node.
>>>>  - use mempool get and put for the mempool objects.
>>>>  - remove pktmbuf_mtod for dma and cpu memcpy.
>>>>
>>>> Test Results for pktmbuf vs mempool:
>>>> ====================================
>>>>
>>>> Format: Buffer Size | % AVG cycles | % AVG Gbps
>>>>
>>>> Category-1: HW-DSA
>>>> -------------------
>>>>    64 | -13.11 | 14.97
>>>>   128 | -41.49 |  0.41
>>>>   256 |  -1.85 |  1.20
>>>>   512 |  -9.38 |  8.81
>>>>  1024 |   1.82 | -2.00
>>>>  1518 |   0.00 | -0.80
>>>>  2048 |   1.03 | -0.91
>>>>  4096 |   0.00 | -0.35
>>>>  8192 |   0.07 | -0.08
>>>>
>>>> Category-2: MEMCPY
>>>> -------------------
>>>>    64 | -12.50 | 14.14
>>>>   128 | -40.63 | 67.26
>>>>   256 | -38.78 | 59.35
>>>>   512 | -30.26 | 43.36
>>>>  1024 | -21.80 | 27.04
>>>>  1518 | -16.23 | 19.33
>>>>  2048 | -14.75 | 16.81
>>>>  4096 |  -9.56 | 10.01
>>>>  8192 |  -3.32 |  3.12
>>>>
>>>> Signed-off-by: Vipin Varghese
>>>> Acked-by: Morten Brørup
>>>> Tested-by: Thiyagrajan P
>>>> ---
>>>> ---
>>>>  app/test-dma-perf/benchmark.c | 74 +++++++++++++++++++++--------------
>>>>  1 file changed, 44 insertions(+), 30 deletions(-)
>>>>
>>>> diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
>>>> index 9b1f58c78c..dc6f16cc01 100644
>>>> --- a/app/test-dma-perf/benchmark.c
>>>> +++ b/app/test-dma-perf/benchmark.c
>>>> @@ -43,8 +43,8 @@ struct lcore_params {
>>>>          uint16_t kick_batch;
>>>>          uint32_t buf_size;
>>>>          uint16_t test_secs;
>>>> -        struct rte_mbuf **srcs;
>>>> -        struct rte_mbuf **dsts;
>>>> +        void **srcs;
>>>> +        void **dsts;
>>>>          volatile struct worker_info worker_info;
>>>>  };
>>>>
>>>> @@ -110,17 +110,17 @@ output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t r
>>>>  }
>>>>
>>>>  static inline void
>>>> -cache_flush_buf(__rte_unused struct rte_mbuf **array,
>>>> +cache_flush_buf(__rte_unused void **array,
>>>>                  __rte_unused uint32_t buf_size,
>>>>                  __rte_unused uint32_t nr_buf)
>>>>  {
>>>>  #ifdef RTE_ARCH_X86_64
>>>>          char *data;
>>>> -        struct rte_mbuf **srcs = array;
>>>> +        void **srcs = array;
>>>>          uint32_t i, offset;
>>>>
>>>>          for (i = 0; i < nr_buf; i++) {
>>>> -                data = rte_pktmbuf_mtod(srcs[i], char *);
>>>> +                data = (char *) srcs[i];
>>>>                  for (offset = 0; offset < buf_size; offset += 64)
>>>>                          __builtin_ia32_clflush(data + offset);
>>>>          }
>>>> @@ -224,8 +224,8 @@ do_dma_mem_copy(void *p)
>>>>          const uint32_t nr_buf = para->nr_buf;
>>>>          const uint16_t kick_batch = para->kick_batch;
>>>>          const uint32_t buf_size = para->buf_size;
>>>> -        struct rte_mbuf **srcs = para->srcs;
>>>> -        struct rte_mbuf **dsts = para->dsts;
>>>> +        void **srcs = para->srcs;
>>>> +        void **dsts = para->dsts;
>>>>          uint16_t nr_cpl;
>>>>          uint64_t async_cnt = 0;
>>>>          uint32_t i;
>>>> @@ -241,8 +241,12 @@ do_dma_mem_copy(void *p)
>>>>          while (1) {
>>>>                  for (i = 0; i < nr_buf; i++) {
>>>>  dma_copy:
>>>> -                        ret = rte_dma_copy(dev_id, 0, rte_mbuf_data_iova(srcs[i]),
>>>> -                                        rte_mbuf_data_iova(dsts[i]), buf_size, 0);
>>>> +                        ret = rte_dma_copy(dev_id,
>>>> +                                        0,
>>>> +                                        (rte_iova_t) srcs[i],
>>>> +                                        (rte_iova_t) dsts[i],
>>
>> Thank you ChengWen for the suggestion, please find my observations below.
>>
>>> should consider IOVA != VA, so here should use rte_mempool_virt2iova(),
>>> but this commit is mainly to eliminate the address-conversion overhead, so we
>>> should prepare IOVA for the DMA copy and VA for the memory copy.
>>
>> yes, Ferruh helped me to understand this. Please let me look into it and share a v3 soon.
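
To make that concrete, below is a rough sketch of the direction I am
considering for v3 (illustrative only, not the final patch; the helper
name and address tables are placeholders I made up for this mail):
resolve the VA for the CPU copy and the IOVA for the DMA copy once at
setup time, so the measured loops do no per-copy conversion.

#include <rte_mempool.h>

/* Fill both address tables once, outside the measured loops. */
static int
prepare_addr_tables(struct rte_mempool *pool, void **va,
                    rte_iova_t *iova, uint32_t nr_buf)
{
        uint32_t i;

        /* same bulk get as in the current patch */
        if (rte_mempool_get_bulk(pool, va, nr_buf) != 0)
                return -1;

        /* rte_mempool_virt2iova() resolves the element's IOVA,
         * so this stays correct when IOVA != VA. */
        for (i = 0; i < nr_buf; i++)
                iova[i] = rte_mempool_virt2iova(va[i]);

        return 0;
}

With this, do_dma_mem_copy() would pass iova[i] straight to
rte_dma_copy(), and do_cpu_mem_copy() would keep using va[i]; neither
hot loop casts or converts per copy.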
>>
>>> I prefer to keep pkt_mbuf, but add two new fields; create the two fields in setup_memory_env(),
>>> then use them directly in do_xxx_mem_copy.
>>
>> Please help me understand if you are suggesting that, in function `setup_memory_env`, we still keep the pkt_mbuf creation.
>>
>> But when the arrays are created, instead of populating them with mbufs, we directly call `pktmbuf_mtod` and store the
>> starting address. Thus in cpu-copy or dma-copy we do not spend time on this computation. Is this what you mean?
>
> Yes
>
>>
>> My reasoning for not using pktmbuf is as follows
>>
>> 1. pkt_mbuf has rte_mbuf metadata + private area + headroom + tailroom.
>>
>> 2. so when we create payloads of 2K, 4K, 8K, 16K, 32K, 1GB we are accounting for extra headroom, which is not efficient.
>>
>> 3. dma-perf is targeted at performance and not at network functions.
>>
>> 4. there is already an existing example which makes use of pktmbuf and dma calls.
>>
>>
>> hence I would like to use mempool, which also supports per-NUMA placement with flags.
>
> What I understand of the low performance: the CPU cannot keep up with the DMA device,
> so the CPU is the bottleneck; when we reduce the tasks of the CPU (just like this commit did),
> the performance improves.
>
> This commit can test the maximum performance when the CPU and DMA work together, so I think we can
> add this commit.
>
> pktmbuf is a popular programming entity, and almost all applications (including examples) in the DPDK
> community are based on pktmbuf.
>
> I think that keeping the use of pktmbuf provides flexibility; someone who wants to do more operations with
> pktmbuf (maybe to emulate the real logic) could easily modify it for testing.

thank you Chengwen for the comments, I have also noticed some merges on
`benchmark.c`. Let me take that as the new baseline and rework this as
v3, adding mempool as a new option.
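
For completeness, here is how I read your pktmbuf suggestion (again
only a sketch; the helper and table names are mine, not from any
existing patch): keep rte_pktmbuf_pool_create(), but resolve the data
addresses once in setup_memory_env() so the copy loops stay as cheap
as the mempool variant.

#include <rte_mbuf.h>

/* Keep the pktmbuf pool; pay the mtod()/iova cost once at setup. */
static int
prepare_mbuf_tables(struct rte_mempool *pool, struct rte_mbuf **mbufs,
                    void **va, rte_iova_t *iova, uint32_t nr_buf)
{
        uint32_t i;

        if (rte_pktmbuf_alloc_bulk(pool, mbufs, nr_buf) != 0)
                return -1;

        for (i = 0; i < nr_buf; i++) {
                /* cache the data pointer and its bus address once */
                va[i] = rte_pktmbuf_mtod(mbufs[i], void *);
                iova[i] = rte_mbuf_data_iova(mbufs[i]);
        }
        return 0;
}

The per-object overhead is still my main concern with this variant:
each mbuf carries its metadata (two cache lines) plus
RTE_PKTMBUF_HEADROOM (128 bytes by default), which is significant
against a 64-byte payload, whereas a raw mempool element is just the
payload.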
>
> Thanks
>
>>
>>> Thanks.
>>>
>>>> +                                        buf_size,
>>>> +                                        0);
>>>>                          if (unlikely(ret < 0)) {
>>>>                                  if (ret == -ENOSPC) {
>>>>                                          do_dma_submit_and_poll(dev_id, &async_cnt, worker_info);
>>>> @@ -276,8 +280,8 @@ do_cpu_mem_copy(void *p)
>>>>          volatile struct worker_info *worker_info = &(para->worker_info);
>>>>          const uint32_t nr_buf = para->nr_buf;
>>>>          const uint32_t buf_size = para->buf_size;
>>>> -        struct rte_mbuf **srcs = para->srcs;
>>>> -        struct rte_mbuf **dsts = para->dsts;
>>>> +        void **srcs = para->srcs;
>>>> +        void **dsts = para->dsts;
>>>>          uint32_t i;
>>>>
>>>>          worker_info->stop_flag = false;
>>>> @@ -288,8 +292,8 @@ do_cpu_mem_copy(void *p)
>>>>
>>>>          while (1) {
>>>>                  for (i = 0; i < nr_buf; i++) {
>>>> -                        const void *src = rte_pktmbuf_mtod(dsts[i], void *);
>>>> -                        void *dst = rte_pktmbuf_mtod(srcs[i], void *);
>>>> +                        const void *src = (void *) dsts[i];
>>>> +                        void *dst = (void *) srcs[i];
>>>>
>>>>                          /* copy buffer form src to dst */
>>>>                          rte_memcpy(dst, src, (size_t)buf_size);
>>>> @@ -303,8 +307,8 @@
>>>>  }
>>>>
>>>>  static int
>>>> -setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs,
>>>> -                struct rte_mbuf ***dsts)
>>>> +setup_memory_env(struct test_configure *cfg, void ***srcs,
>>>> +                void ***dsts)
>>>>  {
>>>>          unsigned int buf_size = cfg->buf_size.cur;
>>>>          unsigned int nr_sockets;
>>>> @@ -317,47 +321,57 @@ setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs,
>>>>                  return -1;
>>>>          }
>>>>
>>>> -        src_pool = rte_pktmbuf_pool_create("Benchmark_DMA_SRC",
>>>> +        src_pool = rte_mempool_create("Benchmark_DMA_SRC",
>>>>                          nr_buf,
>>>> +                        buf_size,
>>>>                          0,
>>>>                          0,
>>>> -                        buf_size + RTE_PKTMBUF_HEADROOM,
>>>> -                        cfg->src_numa_node);
>>>> +                        NULL,
>>>> +                        NULL,
>>>> +                        NULL,
>>>> +                        NULL,
>>>> +                        cfg->src_numa_node,
>>>> +                        RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
>>>>          if (src_pool == NULL) {
>>>>                  PRINT_ERR("Error with source mempool creation.\n");
>>>>                  return -1;
>>>>          }
>>>>
>>>> -        dst_pool = rte_pktmbuf_pool_create("Benchmark_DMA_DST",
>>>> +        dst_pool = rte_mempool_create("Benchmark_DMA_DST",
>>>>                          nr_buf,
>>>> +                        buf_size,
>>>>                          0,
>>>>                          0,
>>>> -                        buf_size + RTE_PKTMBUF_HEADROOM,
>>>> -                        cfg->dst_numa_node);
>>>> +                        NULL,
>>>> +                        NULL,
>>>> +                        NULL,
>>>> +                        NULL,
>>>> +                        cfg->dst_numa_node,
>>>> +                        RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
>>>>          if (dst_pool == NULL) {
>>>>                  PRINT_ERR("Error with destination mempool creation.\n");
>>>>                  return -1;
>>>>          }
>>>>
>>>> -        *srcs = rte_malloc(NULL, nr_buf * sizeof(struct rte_mbuf *), 0);
>>>> +        *srcs = rte_malloc_socket(NULL, nr_buf * sizeof(unsigned char *), 0, cfg->src_numa_node);
>>>>          if (*srcs == NULL) {
>>>>                  printf("Error: srcs malloc failed.\n");
>>>>                  return -1;
>>>>          }
>>>>
>>>> -        *dsts = rte_malloc(NULL, nr_buf * sizeof(struct rte_mbuf *), 0);
>>>> +        *dsts = rte_malloc_socket(NULL, nr_buf * sizeof(unsigned char *), 0, cfg->dst_numa_node);
>>>>          if (*dsts == NULL) {
>>>>                  printf("Error: dsts malloc failed.\n");
>>>>                  return -1;
>>>>          }
>>>>
>>>> -        if (rte_pktmbuf_alloc_bulk(src_pool, *srcs, nr_buf) != 0) {
>>>> -                printf("alloc src mbufs failed.\n");
>>>> +        if (rte_mempool_get_bulk(src_pool, *srcs, nr_buf) != 0) {
>>>> +                printf("alloc src bufs failed.\n");
>>>>                  return -1;
>>>>          }
>>>>
>>>> -        if (rte_pktmbuf_alloc_bulk(dst_pool, *dsts, nr_buf) != 0) {
>>>> -                printf("alloc dst mbufs failed.\n");
>>>> +        if (rte_mempool_get_bulk(dst_pool, *dsts, nr_buf) != 0) {
>>>> +                printf("alloc dst bufs failed.\n");
>>>>                  return -1;
>>>>          }
>>>>
>>>> @@ -370,7 +384,7 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
>>>>          uint16_t i;
>>>>          uint32_t offset;
>>>>          unsigned int lcore_id = 0;
>>>> -        struct rte_mbuf **srcs = NULL, **dsts = NULL;
>>>> +        void **srcs = NULL, **dsts = NULL;
>>>>          struct lcore_dma_map_t *ldm = &cfg->lcore_dma_map;
>>>>          unsigned int buf_size = cfg->buf_size.cur;
>>>>          uint16_t kick_batch = cfg->kick_batch.cur;
>>>> @@ -478,9 +492,9 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
>>>>  out:
>>>>          /* free mbufs used in the test */
>>>>          if (srcs != NULL)
>>>> -                rte_pktmbuf_free_bulk(srcs, nr_buf);
>>>> +                rte_mempool_put_bulk(src_pool, srcs, nr_buf);
>>>>          if (dsts != NULL)
>>>> -                rte_pktmbuf_free_bulk(dsts, nr_buf);
>>>> +                rte_mempool_put_bulk(dst_pool, dsts, nr_buf);
>>>>
>>>>          /* free the points for the mbufs */
>>>>          rte_free(srcs);
>>>>
>> .