From: "Burakov, Anatoly"
To: Kamaraj P
Cc: Bruce Richardson, dev@dpdk.org, mmahmoud@ciso.com
Date: Tue, 28 Jul 2020 11:10:11 +0100
Subject: Re: [dpdk-dev] DPDK hugepage memory fragmentation

On 27-Jul-20 4:30 PM, Kamaraj P wrote:
> Hi Anatoly,
> Since we do not have driver support for SR-IOV with VFIO, we are using
> IGB_UIO.

I believe it's coming :)

> Basically our application is crashing due to a buffer allocation
> failure. I believe it's because it didn't get a contiguous memory
> location and fails to allocate the memory.

Again, "crashing due to buffer allocation failure" is not very
descriptive. When an allocation fails, EAL will produce an error log, so
if your failures are indeed due to memory allocation failures, an EAL
log will tell you whether that's actually the case (and enabling debug
level logging will tell you more).

By default, all memory allocations will *not* be contiguous and
therefore will not fail just because memory is fragmented. For such an
allocation to fail, you actually have to run out of memory.

If there is indeed a place where you are specifically requesting
contiguous memory, it will be signified by a call to the memzone
reserve API with the RTE_MEMZONE_IOVA_CONTIG flag (or a call to
rte_eth_dma_zone_reserve(), if your driver makes use of that API). So
if you're not willing to provide any logs to help with debugging, I
would at least suggest that you grep your codebase for those two things
and put GDB breakpoints right after the calls to either the memzone
reserve API or the ethdev DMA zone reserve API.

To summarize: a regular allocation *will not fail* just because memory
is non-contiguous, so you can disregard those. If you find all the
places where you're requesting *contiguous* memory (there should be at
most one or two), you'll be in a better position to determine whether
this is what's causing the failures.
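For reference, an explicit request for IOVA-contiguous memory looks
roughly like the sketch below. It is purely illustrative: the function
name, memzone name and size are made up rather than taken from any real
driver, but RTE_MEMZONE_IOVA_CONTIG is the flag to grep for and to set
breakpoints around.

#include <stdio.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_memzone.h>

/* Illustrative only: name and size are made up. Passing
 * RTE_MEMZONE_IOVA_CONTIG is what makes a reservation require
 * IOVA-contiguous memory, so only calls like this one can fail
 * because of fragmentation. */
static const struct rte_memzone *
reserve_contig_example(void)
{
        const struct rte_memzone *mz;

        mz = rte_memzone_reserve("example_mz",   /* made-up name */
                                 512 * 1024,     /* made-up size */
                                 rte_socket_id(),
                                 RTE_MEMZONE_IOVA_CONTIG);
        if (mz == NULL)
                printf("contiguous reservation failed: %s\n",
                       rte_strerror(rte_errno));
        return mz;
}

rte_eth_dma_zone_reserve() is the other call to look for, as mentioned
above.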
> Is there any API I can use to dump before our application dies?
> Please let me know.

Not sure what you mean by that, but you could use the
rte_dump_physmem_layout() function to dump your hugepage layout. That
said, I believe a debugger is, in most cases, a better way to diagnose
the issue.
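If you do want such a dump right before the suspect allocation, a
minimal sketch would be something like this. The wrapper name is made
up, and rte_malloc_dump_stats() is a related call added here for
completeness; it wasn't mentioned earlier in this thread.

#include <stdio.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Dump the current hugepage segment layout and heap statistics,
 * e.g. just before the allocation you suspect is failing. */
static void
dump_memory_state(void)
{
        rte_dump_physmem_layout(stderr);     /* physical segment layout */
        rte_malloc_dump_stats(stderr, NULL); /* heap usage statistics */
}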
>
> Thanks,
> Kamaraj
>
> On Mon, Jul 13, 2020 at 2:57 PM Burakov, Anatoly wrote:
>
>> On 11-Jul-20 8:51 AM, Kamaraj P wrote:
>>> Hello Anatoly/Bruce,
>>>
>>> We are using the 18.11 version of DPDK and we are using igb_uio.
>>> The way we observe the issue is that, after we try multiple
>>> iterations of start/stop of the container application (which has
>>> DPDK), we are not able to allocate the memory for a port during
>>> init. We thought that it could be an issue of not getting a
>>> contiguous allocation and hence it fails.
>>>
>>> Is there an API where I can check if the memory is fragmented before
>>> we invoke an allocation?
>>> Or do we have any such mechanism to defragment the memory allocation
>>> once we exit from the application?
>>> Please advise.
>>
>> This is unlikely to be due to fragmentation, because the only way for
>> 18.11 to be affected by memory fragmentation is 1) if you're using
>> legacy mem mode, or 2) if you're using IOVA as PA mode and you need
>> huge amounts of contiguous memory (you are using igb_uio, so you would
>> be in IOVA as PA mode).
>>
>> NICs very rarely, if ever, allocate more than a 2M page's worth of
>> contiguous memory, because their descriptor rings aren't that big, and
>> they'll usually get all the IOVA-contiguous space they need even in
>> the face of heavily fragmented memory. Similarly, while 18.11 mempools
>> will request IOVA-contiguous memory first, they have a fallback to
>> using non-contiguous memory and thus also work just fine in the face
>> of high memory fragmentation.
>>
>> The nature of the 18.11 memory subsystem is such that IOVA layout is
>> decoupled from VA layout, so fragmentation does not affect DPDK as
>> much as it did in previous versions. The first thing I'd suggest is
>> using VFIO as opposed to igb_uio, as it's safer to use in a container
>> environment, and it's less susceptible to memory fragmentation issues
>> because it can remap memory to appear IOVA-contiguous.
>>
>> Could you please provide detailed logs of the init process? You can
>> add '--log-level=eal,8' to the EAL command line to enable debug
>> logging in the EAL.
>>
>>> Thanks,
>>> Kamaraj
>>>
>>> On Fri, Jul 10, 2020 at 9:14 PM Burakov, Anatoly wrote:
>>>
>>>> On 10-Jul-20 11:28 AM, Bruce Richardson wrote:
>>>>> On Fri, Jul 10, 2020 at 02:52:16PM +0530, Kamaraj P wrote:
>>>>>> Hello All,
>>>>>>
>>>>>> We are trying to run a DPDK-based application in container mode.
>>>>>> When we do multiple start/stops of our container application, the
>>>>>> DPDK initialization seems to be failing.
>>>>>> This is because the hugepage memory gets fragmented and it is not
>>>>>> able to find a contiguous allocation of memory to initialize the
>>>>>> buffer in the DPDK init.
>>>>>>
>>>>>> As part of the cleanup of the container, we do call
>>>>>> rte_eal_cleanup() to clean up the memory w.r.t. our application.
>>>>>> However, after iterations we still see the memory allocation
>>>>>> failure due to the fragmentation issue.
>>>>>>
>>>>>> We also tried to set "--huge-unlink" as an argument before we
>>>>>> called rte_eal_init() and it did not help.
>>>>>>
>>>>>> Could you please suggest if there is an option or any existing
>>>>>> patches available to clean up the memory to avoid fragmentation
>>>>>> issues in the future.
>>>>>>
>>>>>> Please advise.
>>>>>>
>>>>> What version of DPDK are you using, and what kernel driver for NIC
>>>>> interfacing are you using?
>>>>> DPDK versions since 18.05 should be more forgiving of fragmented
>>>>> memory, especially if using the vfio-pci kernel driver.
>>>>
>>>> This sounds odd, to be honest.
>>>>
>>>> Unless you're allocating huge chunks of IOVA-contiguous memory,
>>>> fragmentation shouldn't be an issue. How did you determine that this
>>>> was in fact due to fragmentation?
>>>>
>>>>> Regards,
>>>>> /Bruce
>>>>
>>>> --
>>>> Thanks,
>>>> Anatoly
>>
>> --
>> Thanks,
>> Anatoly

--
Thanks,
Anatoly