From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Eelco Chaudron"
To: "ovs dev"
Cc: dev@dpdk.org, stable@dpdk.org, "Alejandro Lucero"
Date: Wed, 14 Nov 2018 13:45:28 +0100
In-Reply-To: <618249C0-F8A9-48D4-9A8A-0703029A47F7@redhat.com>
References: <20181112111819.25087-1-alejandro.lucero@netronome.com>
 <618249C0-F8A9-48D4-9A8A-0703029A47F7@redhat.com>
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH 17.11] mem: fix memory initialization time
List-Id: DPDK patches and discussions

On 12 Nov 2018, at 12:26, Eelco Chaudron wrote:

> On 12 Nov 2018, at 12:18, Alejandro Lucero wrote:
>
>> When using a large amount of hugepage-based memory, doing all the
>> hugepage mappings can take quite a significant time.
>>
>> The problem is that hugepages are initially mmaped to virtual
>> addresses which will be tried later for the final hugepage mmaping.
>> This causes the final mapping to require calling mmap with another
>> hint address, which can happen several times depending on the amount
>> of memory to mmap, with each mmap taking more than a second.
>>
>> This patch changes the hint for the initial hugepage mmaping, using
>> a starting address which will not collide with the final mmaping.
>>
>> Fixes: 293c0c4b957f ("mem: use address hint for mapping hugepages")
>>
>> Signed-off-by: Alejandro Lucero
>
> Thanks Alejandro for sending the patch. This issue was found in an
> OVS-DPDK environment.
> I verified/tested the patch.
>
> Acked-by: Eelco Chaudron
> Tested-by: Eelco Chaudron
>
>> ---
>>  lib/librte_eal/linuxapp/eal/eal_memory.c | 15 +++++++++++++++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
>> index bac969a12..0675809b7 100644
>> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
>> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
>> @@ -421,6 +421,21 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
>>  	}
>>  #endif
>>
>> +#ifdef RTE_ARCH_64
>> +	/*
>> +	 * Hugepages are first mmaped individually and then re-mmapped to
>> +	 * another region for having contiguous physical pages in contiguous
>> +	 * virtual addresses. Setting here vma_addr for the first hugepage
>> +	 * mapped to a virtual address which will not collide with the second
>> +	 * mmaping later. The next hugepages will use increments of this
>> +	 * initial address.
>> +	 *
>> +	 * The final virtual address will be based on baseaddr which is
>> +	 * 0x100000000. We use a hint here starting at 0x200000000, leaving
>> +	 * another 4GB just in case, plus the total available hugepages
>> +	 * memory.
>> +	 */
>> +	vma_addr = (char *)0x200000000 + (hpi->hugepage_sz * hpi->num_pages[0]);
>> +#endif
>>  	for (i = 0; i < hpi->num_pages[0]; i++) {
>>  		uint64_t hugepage_sz = hpi->hugepage_sz;
>>
>> --
>> 2.17.1

Adding the OVS dev list to this thread, as this issue was introduced in DPDK 17.11.4.
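
For readers unfamiliar with the behaviour the patch depends on: without MAP_FIXED, the addr argument of mmap() is only a hint, so the kernel may place the mapping at a different address, and the caller has to undo it and retry with a new hint. Below is a minimal standalone sketch of that retry pattern. It is not the EAL code; the try_map_at() helper, the 2 MB "hugepage" size, the 0x200000000 hint and the retry count are illustrative assumptions only.

```c
/*
 * Sketch of hint-based mmap: ask for a mapping at a hint address and
 * retry with a higher hint if the kernel places it somewhere else.
 * Illustrative only -- not the DPDK EAL implementation.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>

static void *try_map_at(void *hint, size_t len)
{
	void *addr = mmap(hint, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return NULL;
	if (addr != hint) {
		/* Kernel ignored the hint (range already in use): undo. */
		munmap(addr, len);
		return NULL;
	}
	return addr;
}

int main(void)
{
	size_t len = 2 * 1024 * 1024;	/* stand-in for one 2 MB hugepage */
	/* Hint above the 0x100000000 base used for the final mapping. */
	void *hint = (void *)(uintptr_t)0x200000000ULL;
	void *addr;

	/* Walk forward in len-sized steps until a hint is accepted. */
	for (int i = 0; i < 16; i++) {
		addr = try_map_at((char *)hint + (size_t)i * len, len);
		if (addr != NULL) {
			printf("mapped %zu bytes at %p\n", len, addr);
			munmap(addr, len);
			return 0;
		}
	}
	fprintf(stderr, "no hint accepted\n");
	return 1;
}
```

Each rejected hint costs a full mmap/munmap round trip, which is why starting the initial per-page mappings above the region the final mapping will ask for (as the patch does) avoids the repeated retries that made initialization slow.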