From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vipin Varghese <vipin.varghese@amd.com>
To: dev@dpdk.org
Cc: Thiyagrajan P
Subject: [PATCH] app/dma-perf: replace pktmbuf with mempool objects
Date: Tue, 12 Dec 2023 16:07:46 +0530
Message-ID: <20231212103746.1910-1-vipin.varghese@amd.com>
X-Mailer: git-send-email 2.41.0.windows.3
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions <dev.dpdk.org>

Replace the pktmbuf pool with a plain mempool. This increases MOPS,
especially at smaller buffer sizes, since plain mempool objects avoid
the extra CPU cycles spent on mbuf handling.

v1 changes:
 1. Replace pktmbuf pool create with mempool create.
 2. Create the src & dst pointer arrays on the appropriate NUMA node.
 3. Use mempool get and put for the buffer objects (see the sketch
    after this list).
 4. Remove pktmbuf_mtod for the DMA and CPU memcpy paths.
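The core pattern after this change, as a minimal standalone sketch (the
pool name, object count/size, and wrapper function below are illustrative
only, not code from this patch; EAL is assumed to be initialized):

#include <stdio.h>

#include <rte_errno.h>
#include <rte_mempool.h>

#define NR_BUF   1024	/* illustrative object count */
#define BUF_SIZE 2048	/* illustrative object size in bytes */

static int
mempool_pattern_sketch(int numa_node)
{
	struct rte_mempool *mp;
	void *objs[NR_BUF];

	/* Raw-object pool: no mbuf headroom or metadata, just BUF_SIZE
	 * bytes per object, allocated on the requested NUMA node. */
	mp = rte_mempool_create("sketch_pool", NR_BUF, BUF_SIZE, 0, 0,
			NULL, NULL, NULL, NULL, numa_node,
			RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
	if (mp == NULL) {
		printf("mempool creation failed: %d\n", rte_errno);
		return -1;
	}

	/* Take all objects up front; each entry is a plain data pointer,
	 * so no rte_pktmbuf_mtod() is needed to reach the payload. */
	if (rte_mempool_get_bulk(mp, objs, NR_BUF) != 0) {
		rte_mempool_free(mp);
		return -1;
	}

	/* ... benchmark loop would copy between objects here ... */

	/* Return the objects to the pool (replaces
	 * rte_pktmbuf_free_bulk()). */
	rte_mempool_put_bulk(mp, objs, NR_BUF);
	rte_mempool_free(mp);
	return 0;
}

The RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET flags match the
single-producer/single-consumer access pattern of the benchmark and avoid
lock overhead in the pool.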
Test Results for pktmbuf vs mempool:
====================================

Format: Buffer Size | % AVG cycles | % AVG Gbps

Category-1: HW-DSA
-------------------
  64 | -13.11 | 14.97
 128 | -41.49 |  0.41
 256 |  -1.85 |  1.20
 512 |  -9.38 |  8.81
1024 |   1.82 | -2.00
1518 |   0.00 | -0.80
2048 |   1.03 | -0.91
4096 |   0.00 | -0.35
8192 |   0.07 | -0.08

Category-2: MEMCPY
-------------------
  64 | -12.50 | 14.14
 128 | -40.63 | 67.26
 256 | -38.78 | 59.35
 512 | -30.26 | 43.36
1024 | -21.80 | 27.04
1518 | -16.23 | 19.33
2048 | -14.75 | 16.81
4096 |  -9.56 | 10.01
8192 |  -3.32 |  3.12

Signed-off-by: Vipin Varghese <vipin.varghese@amd.com>
Tested-by: Thiyagrajan P
---
 app/test-dma-perf/benchmark.c | 74 +++++++++++++++++++++--------------
 1 file changed, 44 insertions(+), 30 deletions(-)

diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
index 9b1f58c78c..dc6f16cc01 100644
--- a/app/test-dma-perf/benchmark.c
+++ b/app/test-dma-perf/benchmark.c
@@ -43,8 +43,8 @@ struct lcore_params {
 	uint16_t kick_batch;
 	uint32_t buf_size;
 	uint16_t test_secs;
-	struct rte_mbuf **srcs;
-	struct rte_mbuf **dsts;
+	void **srcs;
+	void **dsts;
 	volatile struct worker_info worker_info;
 };
 
@@ -110,17 +110,17 @@ output_result(uint8_t scenario_id, uint32_t lcore_id, char *dma_name, uint16_t r
 }
 
 static inline void
-cache_flush_buf(__rte_unused struct rte_mbuf **array,
+cache_flush_buf(__rte_unused void **array,
 		__rte_unused uint32_t buf_size,
 		__rte_unused uint32_t nr_buf)
 {
 #ifdef RTE_ARCH_X86_64
 	char *data;
-	struct rte_mbuf **srcs = array;
+	void **srcs = array;
 	uint32_t i, offset;
 
 	for (i = 0; i < nr_buf; i++) {
-		data = rte_pktmbuf_mtod(srcs[i], char *);
+		data = (char *) srcs[i];
 		for (offset = 0; offset < buf_size; offset += 64)
 			__builtin_ia32_clflush(data + offset);
 	}
@@ -224,8 +224,8 @@ do_dma_mem_copy(void *p)
 	const uint32_t nr_buf = para->nr_buf;
 	const uint16_t kick_batch = para->kick_batch;
 	const uint32_t buf_size = para->buf_size;
-	struct rte_mbuf **srcs = para->srcs;
-	struct rte_mbuf **dsts = para->dsts;
+	void **srcs = para->srcs;
+	void **dsts = para->dsts;
 	uint16_t nr_cpl;
 	uint64_t async_cnt = 0;
 	uint32_t i;
@@ -241,8 +241,12 @@ do_dma_mem_copy(void *p)
 	while (1) {
 		for (i = 0; i < nr_buf; i++) {
 dma_copy:
-			ret = rte_dma_copy(dev_id, 0, rte_mbuf_data_iova(srcs[i]),
-				rte_mbuf_data_iova(dsts[i]), buf_size, 0);
+			ret = rte_dma_copy(dev_id,
+					0,
+					(rte_iova_t) srcs[i],
+					(rte_iova_t) dsts[i],
+					buf_size,
+					0);
 			if (unlikely(ret < 0)) {
 				if (ret == -ENOSPC) {
 					do_dma_submit_and_poll(dev_id, &async_cnt, worker_info);
@@ -276,8 +280,8 @@ do_cpu_mem_copy(void *p)
 	volatile struct worker_info *worker_info = &(para->worker_info);
 	const uint32_t nr_buf = para->nr_buf;
 	const uint32_t buf_size = para->buf_size;
-	struct rte_mbuf **srcs = para->srcs;
-	struct rte_mbuf **dsts = para->dsts;
+	void **srcs = para->srcs;
+	void **dsts = para->dsts;
 	uint32_t i;
 
 	worker_info->stop_flag = false;
@@ -288,8 +292,8 @@ do_cpu_mem_copy(void *p)
 
 	while (1) {
 		for (i = 0; i < nr_buf; i++) {
-			const void *src = rte_pktmbuf_mtod(dsts[i], void *);
-			void *dst = rte_pktmbuf_mtod(srcs[i], void *);
+			const void *src = (void *) dsts[i];
+			void *dst = (void *) srcs[i];
 
 			/* copy buffer form src to dst */
 			rte_memcpy(dst, src, (size_t)buf_size);
@@ -303,8 +307,8 @@ do_cpu_mem_copy(void *p)
 }
 
 static int
-setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs,
-		struct rte_mbuf ***dsts)
+setup_memory_env(struct test_configure *cfg, void ***srcs,
+		void ***dsts)
 {
 	unsigned int buf_size = cfg->buf_size.cur;
 	unsigned int nr_sockets;
@@ -317,47 +321,57 @@ setup_memory_env(struct test_configure *cfg, struct rte_mbuf ***srcs,
 		return -1;
 	}
 
-	src_pool = rte_pktmbuf_pool_create("Benchmark_DMA_SRC",
+	src_pool = rte_mempool_create("Benchmark_DMA_SRC",
 			nr_buf,
+			buf_size,
 			0,
 			0,
-			buf_size + RTE_PKTMBUF_HEADROOM,
-			cfg->src_numa_node);
+			NULL,
+			NULL,
+			NULL,
+			NULL,
+			cfg->src_numa_node,
+			RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
 	if (src_pool == NULL) {
 		PRINT_ERR("Error with source mempool creation.\n");
 		return -1;
 	}
 
-	dst_pool = rte_pktmbuf_pool_create("Benchmark_DMA_DST",
+	dst_pool = rte_mempool_create("Benchmark_DMA_DST",
 			nr_buf,
+			buf_size,
 			0,
 			0,
-			buf_size + RTE_PKTMBUF_HEADROOM,
-			cfg->dst_numa_node);
+			NULL,
+			NULL,
+			NULL,
+			NULL,
+			cfg->dst_numa_node,
+			RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
 	if (dst_pool == NULL) {
 		PRINT_ERR("Error with destination mempool creation.\n");
 		return -1;
 	}
 
-	*srcs = rte_malloc(NULL, nr_buf * sizeof(struct rte_mbuf *), 0);
+	*srcs = rte_malloc_socket(NULL, nr_buf * sizeof(unsigned char *), 0, cfg->src_numa_node);
 	if (*srcs == NULL) {
 		printf("Error: srcs malloc failed.\n");
 		return -1;
 	}
 
-	*dsts = rte_malloc(NULL, nr_buf * sizeof(struct rte_mbuf *), 0);
+	*dsts = rte_malloc_socket(NULL, nr_buf * sizeof(unsigned char *), 0, cfg->dst_numa_node);
 	if (*dsts == NULL) {
 		printf("Error: dsts malloc failed.\n");
 		return -1;
 	}
 
-	if (rte_pktmbuf_alloc_bulk(src_pool, *srcs, nr_buf) != 0) {
-		printf("alloc src mbufs failed.\n");
+	if (rte_mempool_get_bulk(src_pool, *srcs, nr_buf) != 0) {
+		printf("alloc src bufs failed.\n");
 		return -1;
 	}
 
-	if (rte_pktmbuf_alloc_bulk(dst_pool, *dsts, nr_buf) != 0) {
-		printf("alloc dst mbufs failed.\n");
+	if (rte_mempool_get_bulk(dst_pool, *dsts, nr_buf) != 0) {
+		printf("alloc dst bufs failed.\n");
 		return -1;
 	}
 
@@ -370,7 +384,7 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 	uint16_t i;
 	uint32_t offset;
 	unsigned int lcore_id = 0;
-	struct rte_mbuf **srcs = NULL, **dsts = NULL;
+	void **srcs = NULL, **dsts = NULL;
 	struct lcore_dma_map_t *ldm = &cfg->lcore_dma_map;
 	unsigned int buf_size = cfg->buf_size.cur;
 	uint16_t kick_batch = cfg->kick_batch.cur;
@@ -478,9 +492,9 @@ mem_copy_benchmark(struct test_configure *cfg, bool is_dma)
 out:
 	/* free mbufs used in the test */
 	if (srcs != NULL)
-		rte_pktmbuf_free_bulk(srcs, nr_buf);
+		rte_mempool_put_bulk(src_pool, srcs, nr_buf);
 	if (dsts != NULL)
-		rte_pktmbuf_free_bulk(dsts, nr_buf);
+		rte_mempool_put_bulk(dst_pool, dsts, nr_buf);
 
 	/* free the points for the mbufs */
 	rte_free(srcs);
-- 
2.34.1