From: Stephen Hemminger
To: Newman Poborsky
Cc: dev@dpdk.org
Date: Fri, 19 Dec 2014 17:34:55 -0800
Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM

You can reserve hugepages on the kernel command line (GRUB), so they are
taken at boot, before physical memory has had a chance to fragment.
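For example, something like the following (the values are only
illustrative, the regeneration command varies by distro, and 1G pages
additionally require the pdpe1gb CPU flag):

    # /etc/default/grub: reserve four 1G hugepages at boot
    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=4"
    # then regenerate the config and reboot:
    #   update-grub                              (Debian/Ubuntu)
    #   grub2-mkconfig -o /boot/grub2/grub.cfg   (RHEL/Fedora)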
On Fri, Dec 19, 2014 at 12:13 PM, Newman Poborsky wrote:

> On Thu, Dec 18, 2014 at 9:03 PM, Ananyev, Konstantin
> <konstantin.ananyev@intel.com> wrote:
>
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> > > Sent: Thursday, December 18, 2014 5:43 PM
> > > To: Newman Poborsky; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
> > >
> > > Hi,
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Newman Poborsky
> > > > Sent: Thursday, December 18, 2014 1:26 PM
> > > > To: dev@dpdk.org
> > > > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
> > > >
> > > > Hi,
> > > >
> > > > could someone please explain why mempool creation sometimes fails
> > > > with ENOMEM?
> > > >
> > > > I can run my test app several times without any problem, and then I
> > > > start getting an ENOMEM error when creating the mempool used for
> > > > packets. I tried deleting everything from /mnt/huge, increasing the
> > > > number of hugepages, and remounting /mnt/huge, but nothing helps.
> > > >
> > > > There is more than enough memory on the server. I tried to debug the
> > > > rte_mempool_create() call, and it seems that right after the server
> > > > is restarted the free memory segments are bigger than 2MB, but after
> > > > running the test app several times all free memory segments are only
> > > > 2MB, and since I am requesting 8MB for my packet mempool, the call
> > > > fails. I'm not really sure that this conclusion is correct, though.
> > >
> > > Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
> > > single physically contiguous chunk of memory.
> > > If no such chunk exists, it fails.
> > > Why physically contiguous?
> > > The main reason is to make things easier for us: that way we don't
> > > have to worry about an mbuf crossing a page boundary.
> > > You can overcome the problem like this:
> > > allocate the maximum amount of memory you would need to hold all mbufs
> > > in the worst case (all pages physically disjoint) using rte_malloc().
> >
> > Actually, my mistake: rte_malloc() wouldn't help you here.
> > You probably need to allocate some external memory (not managed by the
> > EAL) in that case, maybe with mmap() and MAP_HUGETLB, or something
> > similar.
> >
> > > Figure out its physical mappings.
> > > Call rte_mempool_xmem_create().
> > > You can look at app/test-pmd/mempool_anon.c as a reference.
> > > It uses the same approach to create a mempool over 4K pages.
> > >
> > > We could probably add a similar function to the mempool API
> > > (create_scatter_mempool or something), or just add a new flag
> > > (USE_SCATTER_MEM) to rte_mempool_create(), but right now neither
> > > exists.
> > >
> > > Another quick alternative: use 1G pages.
> > >
> > > Konstantin
>
> Ok, thanks for the explanation. I understand that this is probably more
> of an OS question than a DPDK one, but is there a way to get a contiguous
> allocation again for the n-th run of my test app? It seems that the
> hugepages get divided/separated into individual 2MB hugepages. Shouldn't
> the OS's memory management group those hugepages back into one contiguous
> chunk once my app/process is done? Again, I know very little about Linux
> memory management and hugepages, so forgive me if this is a stupid
> question.
> Is rebooting the OS the only way to deal with this problem? Or should I
> just try to use 1GB hugepages?
>
> p.s. Konstantin, sorry for the double reply, I accidentally forgot to
> include the dev list in my first reply :)
>
> Newman
>
> > > > Does anybody have any idea what to check, and how running my test
> > > > app several times affects hugepages?
> > > >
> > > > This doesn't make any sense to me, because after the test app exits
> > > > its resources should be freed, right?
> > > >
> > > > This has been driving me crazy for days now. I tried reading a bit
> > > > more theory about hugepages, but didn't find anything that could
> > > > help me. Maybe it's something else entirely and completely trivial,
> > > > but I can't figure it out, so any help is appreciated.
> > > >
> > > > Thank you!
> > > >
> > > > BR,
> > > > Newman P.
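To make Konstantin's suggested workaround above concrete, here is a rough,
untested sketch against the DPDK-1.8-era API. The function name
scatter_pool_create and all the sizes are mine, not from the thread; the
real reference is app/test-pmd/mempool_anon.c, which additionally mlock()s
the region and does proper error handling:

    #define _GNU_SOURCE          /* for MAP_ANONYMOUS / MAP_HUGETLB */
    #include <string.h>
    #include <sys/mman.h>
    #include <rte_memory.h>
    #include <rte_mempool.h>
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    #define PG_SZ    (2UL * 1024 * 1024)   /* 2MB hugepages */
    #define PG_SHIFT 21
    #define POOL_SZ  (8UL * 1024 * 1024)   /* 8MB, as in the failing case */
    #define PG_NUM   (POOL_SZ / PG_SZ)

    static struct rte_mempool *
    scatter_pool_create(const char *name, unsigned n, unsigned elt_size)
    {
        phys_addr_t paddr[PG_NUM];
        uint32_t i;
        void *va;

        /* External memory, not managed by the EAL: the pages do NOT
         * need to be physically contiguous. */
        va = mmap(NULL, POOL_SZ, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (va == MAP_FAILED)
            return NULL;

        /* Touch the pages so they are actually backed before asking
         * the kernel for their physical addresses. */
        memset(va, 0, POOL_SZ);
        for (i = 0; i < PG_NUM; i++)
            paddr[i] = rte_mem_virt2phy((char *)va + i * PG_SZ);

        /* n * elt_size (plus per-object overhead) must fit in POOL_SZ,
         * otherwise this call fails. */
        return rte_mempool_xmem_create(name, n, elt_size,
                32, sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL,
                rte_pktmbuf_init, NULL,
                rte_socket_id(), 0,
                va, paddr, PG_NUM, PG_SHIFT);
    }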
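On the diagnostics side, the fragmentation theory from the first mail is
easy to check from inside the app; a minimal sketch (again DPDK-1.8-era
calls):

    #include <stdio.h>
    #include <rte_memory.h>
    #include <rte_memzone.h>

    /* Call after rte_eal_init(): prints the physical memory segments
     * the EAL mapped (their sizes show the fragmentation) and the
     * memzones already reserved out of them. */
    static void
    dump_eal_memory(void)
    {
        rte_dump_physmem_layout(stdout);
        rte_memzone_dump(stdout);
    }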