From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
 by dpdk.org (Postfix) with ESMTP id 63E721B518;
 Fri, 30 Nov 2018 00:12:41 +0100 (CET)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from yskoh@mellanox.com)
 with ESMTPS (AES256-SHA encrypted); 30 Nov 2018 01:18:31 +0200
Received: from scfae-sc-2.mti.labs.mlnx (scfae-sc-2.mti.labs.mlnx [10.101.0.96])
 by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id wATNCW73032075;
 Fri, 30 Nov 2018 01:12:36 +0200
From: Yongseok Koh
To: Alejandro Lucero
Cc: Anatoly Burakov, Eelco Chaudron, dpdk stable
Date: Thu, 29 Nov 2018 15:09:57 -0800
Message-Id: <20181129231202.30436-3-yskoh@mellanox.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20181129231202.30436-1-yskoh@mellanox.com>
References: <20181129231202.30436-1-yskoh@mellanox.com>
Subject: [dpdk-stable] patch 'mem: fix memory initialization time' has been queued to LTS release 17.11.5
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches for DPDK stable branches
X-List-Received-Date: Thu, 29 Nov 2018 23:12:41 -0000

Hi,

FYI, your patch has been queued to LTS release 17.11.5.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/01/18. So please
shout if anyone has objections.

Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. If the code is different (i.e. not only metadata
diffs), for example due to a change in context or macro names, please double
check it.

Thanks.

Yongseok

---
>From 2aadc16b2f8323490d4017e6e77c80d56c45ca41 Mon Sep 17 00:00:00 2001
From: Alejandro Lucero
Date: Mon, 12 Nov 2018 11:18:19 +0000
Subject: [PATCH] mem: fix memory initialization time

When using a large amount of hugepage-based memory, mapping all the
hugepages can take a significant amount of time. The problem is that the
hugepages are initially mmapped at virtual addresses which will be tried
again later for the final hugepage mapping. This forces the final mapping
to retry mmap with another hint address, which can happen several times
depending on the amount of memory to map, with each retry taking more
than a second.

This patch changes the hint for the initial hugepage mapping to a
starting address which will not collide with the final mapping.

Fixes: 293c0c4b957f ("mem: use address hint for mapping hugepages")

Signed-off-by: Alejandro Lucero
Acked-by: Anatoly Burakov
Acked-by: Eelco Chaudron
Tested-by: Eelco Chaudron
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index bac969a12..0675809b7 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -421,6 +421,21 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
 	}
 #endif
 
+#ifdef RTE_ARCH_64
+	/*
+	 * Hugepages are first mmaped individually and then re-mmapped to
+	 * another region for having contiguous physical pages in contiguous
+	 * virtual addresses. Setting here vma_addr for the first hugepage
+	 * mapped to a virtual address which will not collide with the second
+	 * mmaping later. The next hugepages will use increments of this
+	 * initial address.
+	 *
+	 * The final virtual address will be based on baseaddr which is
+	 * 0x100000000. We use a hint here starting at 0x200000000, leaving
+	 * another 4GB just in case, plus the total available hugepages memory.
+	 */
+	vma_addr = (char *)0x200000000 + (hpi->hugepage_sz * hpi->num_pages[0]);
+#endif
 	for (i = 0; i < hpi->num_pages[0]; i++) {
 		uint64_t hugepage_sz = hpi->hugepage_sz;
-- 
2.11.0
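
For background, the change above relies on mmap() treating its first argument
as a placement hint when MAP_FIXED is not given. The short standalone sketch
below is not DPDK code and not part of the patch; the hugepage size, the page
count, and the use of an anonymous mapping are assumptions chosen only for
illustration. It shows a hint address derived the same way as in the patch
being passed to mmap(), and how a caller can tell whether the kernel honored
it:

/* Illustrative sketch only -- not part of the patch. Constants and sizes
 * below are assumptions for the example, not values taken from DPDK. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
	size_t hugepage_sz = 2UL << 20;     /* assume 2MB hugepages */
	unsigned int num_pages = 4;         /* assume 4 pages configured */

	/* Final mappings start at 0x100000000; leave another 4GB of headroom
	 * plus the total hugepage memory, as the patch does. */
	char *hint = (char *)0x200000000 + hugepage_sz * num_pages;

	/* An anonymous mapping stands in for a hugepage file here. Without
	 * MAP_FIXED the address is only a hint, so the kernel may ignore it. */
	void *va = mmap(hint, hugepage_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (va == (void *)hint)
		printf("mapping placed at the hinted address %p\n", va);
	else
		printf("kernel ignored the hint: got %p, wanted %p\n",
				va, (void *)hint);

	munmap(va, hugepage_sz);
	return 0;
}

Because the first-pass hint already sits above baseaddr plus 4GB plus the
total hugepage memory, the second-pass mapping at baseaddr is free to succeed
on its first attempt instead of probing repeatedly, which is where the
reported initialization time went.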