From: Bruce Richardson
To: Kamaraj P
Cc: dev@dpdk.org, "Burakov, Anatoly"
Date: Fri, 10 Jul 2020 11:28:37 +0100
Subject: Re: [dpdk-dev] DPDK hugepage memory fragmentation

On Fri, Jul 10, 2020 at 02:52:16PM +0530, Kamaraj P wrote:
> Hello All,
>
> We are trying to run a DPDK-based application in container mode. When we
> start and stop our container application repeatedly, DPDK initialization
> eventually fails. This happens because the hugepage memory becomes
> fragmented, and DPDK cannot find a contiguous region large enough to
> initialize its buffers during init.
>
> As part of container cleanup, we call rte_eal_cleanup() to release our
> application's memory. However, after several iterations we still see
> memory allocation failures caused by this fragmentation.
>
> We also tried passing "--huge-unlink" as an argument to rte_eal_init(),
> but it did not help.
>
> Could you please suggest whether there is an option or any existing patch
> to clean up the memory and avoid fragmentation issues in the future?
>
> Please advise.
>
What version of DPDK are you using, and what kernel driver are you using
for NIC interfacing? DPDK versions since 18.05 should be more forgiving of
fragmented memory, especially when using the vfio-pci kernel driver.

Regards,
/Bruce
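
Below is a minimal sketch of the init/cleanup sequence discussed in this
thread, assuming DPDK 18.05 or later. The pool name "mbuf_pool", the pool
sizes, and the example command line are illustrative placeholders, not
values taken from the original mails.

/*
 * Minimal sketch: initialize EAL, create an mbuf pool, and release
 * hugepage memory on shutdown via rte_eal_cleanup(). Assumes DPDK
 * 18.05+; pool name and sizes below are placeholder values.
 *
 * EAL flags are passed on the command line, e.g.:
 *   ./app -l 0-1 --huge-unlink
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
	/* Parse EAL arguments, including any --huge-unlink flag. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0) {
		fprintf(stderr, "rte_eal_init failed: %s\n",
				rte_strerror(rte_errno));
		return 1;
	}

	/* "mbuf_pool" and the element counts are illustrative only. */
	struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
			8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
	if (pool == NULL) {
		fprintf(stderr, "pool creation failed: %s\n",
				rte_strerror(rte_errno));
		rte_eal_cleanup();
		return 1;
	}

	/* ... application work ... */

	/*
	 * Release hugepage memory back to the system before exit;
	 * skipping this across repeated container restarts is one way
	 * hugepage memory ends up exhausted or fragmented.
	 */
	rte_eal_cleanup();
	return 0;
}

With this pattern the same binary can be started with and without
--huge-unlink to compare behaviour, since the flag is consumed by
rte_eal_init() from the command line rather than hard-coded.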