From: Newman Poborsky
To: "Ananyev, Konstantin"
Cc: "dev@dpdk.org"
Date: Fri, 19 Dec 2014 21:13:25 +0100
Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM

On Thu, Dec 18, 2014 at 9:03 PM, Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> > Sent: Thursday, December 18, 2014 5:43 PM
> > To: Newman Poborsky; dev@dpdk.org
> > Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
> >
> > Hi
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Newman Poborsky
> > > Sent: Thursday, December 18, 2014 1:26 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
> > >
> > > Hi,
> > >
> > > could someone please explain why mempool creation sometimes fails
> > > with ENOMEM?
> > >
> > > I run my test app several times without any problems and then I start
> > > getting an ENOMEM error when creating the mempool that is used for
> > > packets. I try to delete everything from /mnt/huge, I increase the
> > > number of huge pages, remount /mnt/huge, but nothing helps.
> > >
> > > There is more than enough memory on the server. I tried to debug the
> > > rte_mempool_create() call and it seems that after the server is
> > > restarted the free mem segments are bigger than 2MB, but after running
> > > the test app several times all free mem segments have a size of 2MB,
> > > and since I am requesting 8MB for my packet mempool, this fails. I'm
> > > not really sure that this conclusion is correct.
> >
> > Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
> > single physically contiguous chunk of memory.
> > If no such chunk exists, it will fail.
> > Why physically contiguous?
> > Main reason: to make things easier for us, as in that case we don't
> > have to worry about the situation where an mbuf crosses a page boundary.
> > So you can overcome that problem like this:
> > Allocate the maximum amount of memory you would need to hold all mbufs
> > in the worst case (all pages physically disjoint) using rte_malloc().
>
> Actually, my mistake: rte_malloc() wouldn't help you here.
> You probably need to allocate some external (not managed by the EAL)
> memory in that case, maybe with mmap() and MAP_HUGETLB, or something
> similar.
>
> > Figure out its physical mappings.
> > Call rte_mempool_xmem_create().
> > You can look at app/test-pmd/mempool_anon.c as a reference.
> > It uses the same approach to create a mempool over 4K pages.
> >
> > We will probably add a similar function to the mempool API
> > (create_scatter_mempool or something) or just add a new flag
> > (USE_SCATTER_MEM) to rte_mempool_create().
> > Though right now it is not there.
> >
> > Another quick alternative: use 1G pages.
> >
> > Konstantin
>

Ok, thanks for the explanation. I understand that this is probably more of
an OS question than a DPDK one, but is there a way to again allocate
contiguous memory for the n-th run of my test app? It seems that the
hugepages get divided/separated into individual 2MB hugepages. Shouldn't
the OS's memory management try to group those hugepages back into one
contiguous chunk once my app/process is done? Again, I know very little
about Linux memory management and hugepages, so forgive me if this is a
stupid question. Is rebooting the OS the only way to deal with this
problem? Or should I just try to use 1GB hugepages?

P.S. Konstantin, sorry for the double reply, I accidentally forgot to
include the dev list in my first reply :)

Newman

> > > Does anybody have any idea what to check and how running my test app
> > > several times affects hugepages?
> > >
> > > For me, this doesn't make any sense, because after the test app
> > > exits, resources should be freed, right?
> > >
> > > This has been driving me crazy for days now. I tried reading a bit
> > > more theory about hugepages, but didn't find anything that could help
> > > me. Maybe it's something else and completely trivial, but I can't
> > > figure it out, so any help is appreciated.
> > >
> > > Thank you!
> > >
> > > BR,
> > > Newman P.
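
For anyone who hits this later, here is a rough, untested sketch of the
external-memory approach described above, written against the DPDK 1.8-era
API (rte_mempool_xmem_size(), rte_mem_virt2phy(), rte_mempool_xmem_create());
the element count, element size and pool name are only placeholders:

/*
 * Sketch only: reserve hugepage memory outside the EAL with
 * mmap(MAP_HUGETLB), resolve the physical address of every page, and
 * hand the region to rte_mempool_xmem_create().  N_MBUFS, MBUF_SIZE and
 * the pool name are made up; error handling is minimal.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/mman.h>

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_memory.h>
#include <rte_mempool.h>

#define N_MBUFS   8192
#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
#define PG_SHIFT  21                          /* 2 MB hugepages */
#define PG_SIZE   (1ULL << PG_SHIFT)

static struct rte_mempool *
create_xmem_pktmbuf_pool(void)
{
	/* Worst-case size: assume all pages end up physically disjoint. */
	size_t len = rte_mempool_xmem_size(N_MBUFS, MBUF_SIZE, PG_SHIFT);
	uint32_t pg_num = (len + PG_SIZE - 1) >> PG_SHIFT;
	phys_addr_t paddr[pg_num];
	uint32_t i;

	len = (size_t)pg_num << PG_SHIFT;

	/* Hugepage memory not managed by the EAL; MAP_POPULATE faults the
	 * pages in so their physical addresses can be resolved below. */
	void *vaddr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
			MAP_POPULATE, -1, 0);
	if (vaddr == MAP_FAILED)
		return NULL;

	/* Physical address of each 2 MB page in the region. */
	for (i = 0; i != pg_num; i++) {
		paddr[i] = rte_mem_virt2phy((char *)vaddr + i * PG_SIZE);
		if (paddr[i] == RTE_BAD_PHYS_ADDR) {
			munmap(vaddr, len);
			return NULL;
		}
	}

	/* The mbufs live in our own pages, so no large physically
	 * contiguous memzone is needed for the elements. */
	return rte_mempool_xmem_create("xmem_pool", N_MBUFS, MBUF_SIZE,
			32, sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL,
			rte_pktmbuf_init, NULL,
			rte_socket_id(), 0,
			vaddr, paddr, pg_num, PG_SHIFT);
}

MAP_POPULATE makes the kernel fault the hugepages in up front, so
rte_mem_virt2phy() (which reads /proc/self/pagemap and therefore needs root)
can resolve their physical addresses; only the mempool header and its ring
still come out of EAL memory, so fragmentation of the EAL hugepage segments
no longer matters for the mbufs themselves.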