From: Andriy Berestovskyy
Date: Tue, 4 Oct 2016 13:27:23 +0200
To: Renata Saiakhova
Cc: Sergio Gonzalez Monroy, users <users@dpdk.org>
Subject: Re: [dpdk-users] rte_segments: hugepages are not in contiguous memory

Renata,
In theory, 512 contiguous 2MB huge pages might get transparently
promoted to a single 1GB "superpage" covered by a single TLB entry,
but I am not even sure that is implemented in Linux... So I do not
think there will be any noticeable performance difference between
contiguous and non-contiguous 2MB huge pages. But you'd better
measure it to make sure ;)

Regards,
Andriy

On Tue, Oct 4, 2016 at 12:48 PM, Renata Saiakhova wrote:
> Hi Andriy,
>
> thanks for your reply. I guess that contiguous memory is requested
> for performance reasons. Do you know if I can expect a noticeable
> performance drop using non-contiguous memory?
>
> Renata
>
>
> On 10/04/2016 12:13 PM, Andriy Berestovskyy wrote:
>>
>> Hi Renata,
>> DPDK supports non-contiguous memory pools, but
>> rte_pktmbuf_pool_create() uses rte_mempool_create_empty() with
>> flags set to zero, i.e. it requests contiguous memory.
>>
>> As a workaround, in rte_pktmbuf_pool_create() try to pass the
>> MEMPOOL_F_NO_PHYS_CONTIG flag as the last argument to
>> rte_mempool_create_empty().
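>>
>> For example (a sketch only; in 16.07, where the flag is available,
>> the call site in lib/librte_mbuf/rte_mbuf.c looks roughly like
>> this, though the surrounding code may differ in your tree):
>>
>>     mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>             sizeof(struct rte_pktmbuf_pool_private), socket_id,
>>             MEMPOOL_F_NO_PHYS_CONTIG); /* was 0, i.e. the pool
>>                                         * had to be physically
>>                                         * contiguous */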
>>
>> Note that KNI and some PMDs in 16.07 still require contiguous
>> memory pools, so the trick might not work for your setup. For KNI,
>> try DPDK's master branch, which includes this commit by Ferruh
>> Yigit:
>>
>> 8451269 kni: remove continuous memory restriction
>>
>> Regards,
>> Andriy
>>
>>
>> On Tue, Oct 4, 2016 at 11:38 AM, Renata Saiakhova wrote:
>>>
>>> Hi Sergio,
>>>
>>> thank you for your quick answer. I also tried to allocate 1GB
>>> hugepages, but it seems the kernel fails to allocate them:
>>> previously I saw that HugePages_Total in /proc/meminfo was set to
>>> 0, and now the kernel hangs at boot time (I don't know why).
>>> But anyway, if there is no way to control hugepage allocation so
>>> that the pages end up in contiguous memory, the only option is to
>>> accept it and adapt the code so that it creates several pools
>>> which in total satisfy the requested size.
>>>
>>> Renata
>>>
>>>
>>> On 10/04/2016 10:27 AM, Sergio Gonzalez Monroy wrote:
>>>>
>>>> On 04/10/2016 09:00, Renata Saiakhova wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm using DPDK 16.04 (I tried 16.07 with the same results) and
>>>>> Linux kernel 4.4.20 in a virtual machine (using the libvirt
>>>>> framework). I pass a parameter on the kernel command line to
>>>>> allocate 512 hugepages of 2 MB at boot time. They are
>>>>> successfully allocated. When a DPDK application starts, it
>>>>> calls rte_pktmbuf_pool_create(), which in turn internally
>>>>> requests 649363712 bytes. Those bytes should be allocated from
>>>>> one of the rte_memsegs; each rte_memseg describes a contiguous
>>>>> portion of memory (both physical and virtual) built on
>>>>> hugepages. This allocation fails, because there is no
>>>>> rte_memseg of this size (or bigger). Further debugging shows
>>>>> that the hugepages are allocated in non-contiguous physical
>>>>> memory, and therefore the rte_memsegs are built respecting the
>>>>> gaps in physical memory.
>>>>> Below are the sizes of the segments built on the hugepages, in
>>>>> bytes (one way to dump this layout is sketched after this
>>>>> message):
>>>>> 2097152
>>>>> 6291456
>>>>> 2097152
>>>>> 524288000
>>>>> 2097152
>>>>> 532676608
>>>>> 2097152
>>>>> 2097152
>>>>> So there are five segments which include only one hugepage!
>>>>> This behavior is completely different from what I observe with
>>>>> Linux kernel 3.8 (used with the same DPDK application), where
>>>>> all hugepages are allocated in contiguous memory.
>>>>> Does anyone experience the same issue? Could there be some
>>>>> kernel option which does the magic? If not, and the kernel can
>>>>> allocate hugepages in non-contiguous memory, how is DPDK going
>>>>> to resolve it?
>>>>>
>>>> I don't think there is anything we can do to force the kernel to
>>>> pre-allocate contiguous hugepages on boot. If there were, we
>>>> wouldn't need to do all the mapping sorting and grouping we do
>>>> in DPDK, as we would rely on the kernel giving us pre-allocated
>>>> contiguous hugepages.
>>>>
>>>> If you have plenty of memory, one possible workaround would be
>>>> to increase the number of default hugepages so we are more
>>>> likely to find contiguous ones.
>>>>
>>>> Is using 1GB hugepages a possibility in your case?
>>>>
>>>> Sergio
>>>>
>>>>> Thanks in advance,
>>>>> Renata
>>>>>

--
Andriy Berestovskyy
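
(One way to dump the segment layout quoted above: DPDK already
provides rte_dump_physmem_layout(stdout), or the memseg table can be
walked directly. A minimal sketch against the 16.x memory API, which
changed in later releases; call it after rte_eal_init():)

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_config.h>
    #include <rte_memory.h>

    /* Print every physically contiguous segment EAL assembled from
     * the hugepages; gaps in physical memory show up as separate,
     * smaller segments like the ones listed in the thread above. */
    static void
    dump_memsegs(void)
    {
        const struct rte_memseg *ms = rte_eal_get_physmem_layout();
        unsigned int i;

        for (i = 0; i < RTE_MAX_MEMSEG; i++) {
            if (ms[i].addr == NULL)
                break; /* end of the populated entries */
            printf("memseg %u: phys=0x%" PRIx64 " len=%zu"
                   " hugepage_sz=%" PRIu64 "\n",
                   i, (uint64_t)ms[i].phys_addr, ms[i].len,
                   ms[i].hugepage_sz);
        }
    }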