From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andriy Berestovskyy
Date: Tue, 4 Oct 2016 12:13:39 +0200
To: Renata Saiakhova
Cc: Sergio Gonzalez Monroy, users
Subject: Re: [dpdk-users] rte_segments: hugepages are not in contiguous memory
List-Id: usage discussions

Hi Renata,

DPDK supports non-contiguous memory pools, but rte_pktmbuf_pool_create()
calls rte_mempool_create_empty() with flags set to zero, i.e. it requests
physically contiguous memory.

As a workaround, inside rte_pktmbuf_pool_create() try passing the
MEMPOOL_F_NO_PHYS_CONTIG flag as the last argument to
rte_mempool_create_empty().

Note that KNI and some PMDs in 16.07 still require contiguous memory
pools, so the trick might not work for your setup. For KNI, try DPDK's
master branch, which includes this commit by Ferruh Yigit:

8451269 kni: remove continuous memory restriction

Regards,
Andriy

On Tue, Oct 4, 2016 at 11:38 AM, Renata Saiakhova wrote:
> Hi Sergio,
>
> thank you for your quick answer. I also tried to allocate a 1 GB
> hugepage, but the kernel seems to fail to allocate it: previously I saw
> that HugePages_Total in /proc/meminfo was set to 0, and now the kernel
> hangs at boot time (I don't know why).
> Anyway, if there is no way to ensure hugepages are allocated in
> contiguous memory, the only option is to accept it and adapt the code so
> that it creates several pools which in total satisfy the requested size.
>
> Renata
>
> On 10/04/2016 10:27 AM, Sergio Gonzalez Monroy wrote:
>> On 04/10/2016 09:00, Renata Saiakhova wrote:
>>> Hi all,
>>>
>>> I'm using DPDK 16.04 (I tried 16.07 with the same results) and Linux
>>> kernel 4.4.20 in a virtual machine (using the libvirt framework). I
>>> pass a kernel command-line parameter to allocate 512 hugepages of 2 MB
>>> at boot time. They are successfully allocated.
>>> When an application using DPDK starts,
>>> it calls rte_pktmbuf_pool_create(), which in turn internally requests
>>> 649363712 bytes. Those bytes should be allocated from one of the
>>> rte_memsegs. An rte_memseg describes a contiguous portion of memory
>>> (both physical and virtual) built on hugepages. This allocation fails,
>>> because there is no rte_memseg of this size (or bigger). Further
>>> debugging shows that the hugepages are allocated in non-contiguous
>>> physical memory, and therefore the rte_memsegs are built respecting
>>> the gaps in physical memory.
>>> Below are the sizes of the segments built on hugepages (in bytes):
>>> 2097152
>>> 6291456
>>> 2097152
>>> 524288000
>>> 2097152
>>> 532676608
>>> 2097152
>>> 2097152
>>> So there are 5 segments which include only one hugepage!
>>> This behavior is completely different from what I observe with Linux
>>> kernel 3.8 (used with the same application and DPDK), where all
>>> hugepages are allocated in contiguous memory.
>>> Does anyone experience the same issue? Could some kernel option do the
>>> magic? If not, and the kernel can allocate hugepages in non-contiguous
>>> memory, how is DPDK going to handle it?
>>>
>>
>> I don't think there is anything we can do to force the kernel to
>> pre-allocate contiguous hugepages on boot. If there were, we wouldn't
>> need all the mapping sorting and grouping we do in DPDK, as we would
>> rely on the kernel giving us pre-allocated contiguous hugepages.
>>
>> If you have plenty of memory, one possible workaround is to increase
>> the number of default hugepages so we are more likely to find
>> contiguous ones.
>>
>> Is using 1 GB hugepages a possibility in your case?
>>
>> Sergio
>>
>>> Thanks in advance,
>>> Renata
>>>
>>
>

--
Andriy Berestovskyy
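
P.S. The workaround above could look roughly like the sketch below: a copy
of what rte_pktmbuf_pool_create() does in 16.07, with the flags argument of
rte_mempool_create_empty() changed from 0 to MEMPOOL_F_NO_PHYS_CONTIG so
the pool may be populated across non-contiguous rte_memsegs. This is
untested and abbreviated (no priv_size alignment check, minimal error
handling); the function name and parameters are illustrative, not from the
thread. And again: KNI and some PMDs still assume contiguous pools.

```c
#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Like rte_pktmbuf_pool_create() (DPDK 16.07), but the pool is allowed
 * to span physically non-contiguous memory segments. */
static struct rte_mempool *
pktmbuf_pool_create_noncontig(const char *name, unsigned n,
                              unsigned cache_size, uint16_t priv_size,
                              uint16_t data_room_size, int socket_id)
{
    struct rte_pktmbuf_pool_private priv;
    struct rte_mempool *mp;
    unsigned elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;

    priv.mbuf_data_room_size = data_room_size;
    priv.mbuf_priv_size = priv_size;

    /* The only change vs. rte_pktmbuf_pool_create(): flags != 0 */
    mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                                  sizeof(priv), socket_id,
                                  MEMPOOL_F_NO_PHYS_CONTIG);
    if (mp == NULL)
        return NULL;

    rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
    rte_pktmbuf_pool_init(mp, &priv);

    if (rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    /* Initialize each mbuf, as rte_pktmbuf_pool_create() does */
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
    return mp;
}
```

The function is a drop-in replacement for rte_pktmbuf_pool_create() calls
in the application; no other changes should be needed as long as nothing
in the datapath relies on the pool being physically contiguous.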