From: Newman Poborsky
To: Stephen Hemminger
Cc: dev@dpdk.org
Date: Mon, 22 Dec 2014 11:48:38 +0100
Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM

On Sat, Dec 20, 2014 at 2:34 AM, Stephen Hemminger wrote:
> You can reserve hugepages on the kernel cmdline (GRUB).

Great, thanks, I'll try that!
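If I'm reading that right, it would be something like the following on
the kernel command line (the page counts are illustrative placeholders,
not recommendations):

    default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=512

i.e. added in /etc/default/grub, followed by regenerating the GRUB
config and rebooting. I gather that 1 GB pages in particular have to be
reserved at boot on most systems, so this would also cover the 1 GB
option mentioned below. Pages set aside at boot, before userspace
starts, should also be much less prone to the fragmentation discussed
below.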
Newman

>
> On Fri, Dec 19, 2014 at 12:13 PM, Newman Poborsky wrote:
>>
>> On Thu, Dec 18, 2014 at 9:03 PM, Ananyev, Konstantin
>> <konstantin.ananyev@intel.com> wrote:
>>
>> > > -----Original Message-----
>> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
>> > > Konstantin
>> > > Sent: Thursday, December 18, 2014 5:43 PM
>> > > To: Newman Poborsky; dev@dpdk.org
>> > > Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
>> > >
>> > > Hi
>> > >
>> > > > -----Original Message-----
>> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Newman Poborsky
>> > > > Sent: Thursday, December 18, 2014 1:26 PM
>> > > > To: dev@dpdk.org
>> > > > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
>> > > >
>> > > > Hi,
>> > > >
>> > > > Could someone please explain why mempool creation sometimes fails
>> > > > with ENOMEM?
>> > > >
>> > > > I run my test app several times without any problems, and then I
>> > > > start getting an ENOMEM error when creating the mempools that are
>> > > > used for packets. I tried deleting everything from /mnt/huge,
>> > > > increasing the number of hugepages, and remounting /mnt/huge, but
>> > > > nothing helps.
>> > > >
>> > > > There is more than enough memory on the server. I tried to debug
>> > > > the rte_mempool_create() call, and it seems that after the server
>> > > > is restarted the free memory segments are bigger than 2 MB, but
>> > > > after running the test app several times all free memory segments
>> > > > are only 2 MB. Since I am requesting 8 MB for my packet mempool,
>> > > > this fails. I'm not really sure that this conclusion is correct.
>> > >
>> > > Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
>> > > single physically contiguous chunk of memory. If no such chunk
>> > > exists, it fails.
>> > > Why physically contiguous? The main reason is to make things easier
>> > > for us: that way we don't have to worry about an mbuf crossing a
>> > > page boundary.
>> > > You can work around the problem like this:
>> > > Allocate the maximum amount of memory you would need to hold all
>> > > mbufs in the worst case (all pages physically disjoint) using
>> > > rte_malloc().
>> >
>> > Actually, my mistake: rte_malloc() wouldn't help you here. You
>> > probably need to allocate some external (not managed by the EAL)
>> > memory in that case, maybe with mmap() and MAP_HUGETLB, or something
>> > similar.
>> >
>> > > Figure out its physical mappings.
>> > > Call rte_mempool_xmem_create().
>> > > You can look at app/test-pmd/mempool_anon.c as a reference; it uses
>> > > the same approach to create a mempool over 4 KB pages.
>> > >
>> > > We will probably add a similar function to the mempool API
>> > > (create_scatter_mempool or something), or just add a new flag
>> > > (USE_SCATTER_MEM) to rte_mempool_create(). Though right now neither
>> > > exists.
>> > >
>> > > Another quick alternative: use 1 GB pages.
>> > >
>> > > Konstantin
>>
>> Ok, thanks for the explanation. I understand that this is probably an
>> OS question more than a DPDK one, but is there a way to get a
>> contiguous allocation again for the n-th run of my test app? It seems
>> that the hugepages get split into individual 2 MB pages. Shouldn't the
>> OS's memory management group those hugepages back into one contiguous
>> chunk once my app/process is done? Again, I know very little about
>> Linux memory management and hugepages, so forgive me if this is a
>> stupid question.
>> Is rebooting the OS the only way to deal with this problem? Or should
>> I just try to use 1 GB hugepages?
>>
>> p.s. Konstantin, sorry for the double reply, I accidentally forgot to
>> include the dev list in my first reply :)
>>
>> Newman
>>
>> > > >
>> > > > Does anybody have any idea what to check, and how running my test
>> > > > app several times affects hugepages?
>> > > >
>> > > > This doesn't make any sense to me, because after the test app
>> > > > exits, its resources should be freed, right?
>> > > >
>> > > > This has been driving me crazy for days now. I tried reading a
>> > > > bit more theory about hugepages, but didn't find anything that
>> > > > could help me. Maybe it's something else and completely trivial,
>> > > > but I can't figure it out, so any help is appreciated.
>> > > >
>> > > > Thank you!
>> > > >
>> > > > BR,
>> > > > Newman P.
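For reference, below is a rough sketch of the external-memory approach
Konstantin describes: mmap() hugepages outside the EAL, resolve each
page's physical address via /proc/self/pagemap, and hand the mapping to
rte_mempool_xmem_create(). It is modeled loosely on
app/test-pmd/mempool_anon.c; the pool name, sizes, and counts are
illustrative, and the rte_mempool_xmem_create() prototype should be
checked against rte_mempool.h for the DPDK version in use.

/* Sketch only: written against the DPDK 1.8-era API. */
#define _GNU_SOURCE                   /* for MAP_HUGETLB */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_memory.h>
#include <rte_mempool.h>

#define PG_SHIFT  21                  /* 2 MB hugepages */
#define PG_NUM    8                   /* 8 pages = 16 MB of backing memory */
#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

/* Resolve the physical address behind a virtual address by reading
 * /proc/self/pagemap (the same trick mempool_anon.c uses).  The page
 * must already be faulted in; returns 0 on failure. */
static phys_addr_t
virt2phys(const void *va)
{
        uint64_t entry;
        long psz = sysconf(_SC_PAGESIZE);
        off_t ofs = (off_t)((uintptr_t)va / psz) * sizeof(entry);
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
                return 0;
        if (pread(fd, &entry, sizeof(entry), ofs) != sizeof(entry)) {
                close(fd);
                return 0;
        }
        close(fd);
        if (!(entry & (1ULL << 63)))  /* page not present */
                return 0;
        /* bits 0-54 hold the page frame number */
        return (entry & ((1ULL << 55) - 1)) * psz
                + ((uintptr_t)va & (psz - 1));
}

static struct rte_mempool *
create_xmem_pktmbuf_pool(void)
{
        size_t len = (size_t)PG_NUM << PG_SHIFT;
        phys_addr_t paddr[PG_NUM];
        uint32_t i;

        /* Hugepage memory not managed by the EAL; the pages do not
         * have to be physically contiguous with one another. */
        void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (va == MAP_FAILED)
                return NULL;

        memset(va, 0, len);           /* fault pages in for pagemap */

        for (i = 0; i != PG_NUM; i++) {
                paddr[i] = virt2phys((char *)va + ((size_t)i << PG_SHIFT));
                if (paddr[i] == 0) {
                        munmap(va, len);
                        return NULL;
                }
        }

        /* Same arguments as rte_mempool_create(), plus the virtual base
         * address, per-page physical addresses, page count and shift. */
        return rte_mempool_xmem_create("pkt_pool", 4096, MBUF_SIZE,
                        32, sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init, NULL,
                        rte_pktmbuf_init, NULL,
                        rte_socket_id(), 0, va, paddr, PG_NUM, PG_SHIFT);
}

If the mmap() call itself fails, the hugepages were most likely never
reserved in the first place; HugePages_Free in /proc/meminfo is a quick
way to confirm before digging further.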