From: "Burakov, Anatoly"
To: Edwin Leung, Iain Barker, "Wiles, Keith"
Cc: dev@dpdk.org
Date: Thu, 28 Feb 2019 10:36:16 +0000
Subject: Re: [dpdk-dev] Question about DPDK hugepage fd change

On 27-Feb-19 6:02 PM, Edwin Leung wrote:
> Hi Anatoly,
>
> In my test with DPDK 18.11, I notice the following:
>
> 1. Using the --legacy-mem switch, DPDK still opens 1 fd per huge page. In
> essence, it is the same with or without this switch.
>
> 2. Using --single-file-segments does reduce the open fd count to 1. However,
> for each huge page that is in use, a .lock file is opened. As a result, it
> still uses up a large number of fd's.
>
> Thanks.
> -- edwin
>
> -----Original Message-----
> From: Iain Barker
> Sent: Wednesday, February 27, 2019 8:57 AM
> To: Burakov, Anatoly; Wiles, Keith
> Cc: dev@dpdk.org; Edwin Leung
> Subject: RE: [dpdk-dev] Question about DPDK hugepage fd change
>
> Original Message from: Burakov, Anatoly [mailto:anatoly.burakov@intel.com]
>
>> I just realized that, unless you're using the --legacy-mem switch, one
>> other way to alleviate the issue would be to use the --single-file-segments
>> option. This will still store the fd's, however it will only do so per
>> memseg list, not per page. So, instead of thousands of fd's with 2MB
>> pages, you'd end up with under 10. Hope this helps!
>
> Hi Anatoly,
>
> Thanks for the update and suggestion. We did try using --single-file-segments
> previously. Although it lowers the number of fd's allocated for tracking the
> segments, as you noted, there is still a problem.
>
> It seems that a .lock file is created for each huge page, not for each
> segment. So with 2MB pages the glibc limit of 1024 fd's is still exhausted
> quickly if there is ~2GB of 2MB huge pages.
>
> Edwin can provide more details from his testing. In our case it happens much
> sooner: since we already use >500 fd's for the application, just 1GB of 2MB
> huge pages is enough to hit the fd limit due to the .lock files.
>
> Thanks.

Right, I forgot about that. Thanks for noticing! :)
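For reference, a quick way to see how many descriptors a process is holding at
any point (e.g. right after rte_eal_init()) is to list /proc/self/fd. Below is
a minimal, Linux-only sketch - the count_open_fds() helper is purely
illustrative, not part of DPDK:

#include <stdio.h>
#include <dirent.h>

/* Count the file descriptors currently held by this process by listing
 * /proc/self/fd. Calling this before and after EAL init shows how the
 * per-page fds (and .lock files) add up. */
static int count_open_fds(void)
{
        DIR *d = opendir("/proc/self/fd");
        struct dirent *e;
        int n = 0;

        if (d == NULL)
                return -1;
        while ((e = readdir(d)) != NULL) {
                if (e->d_name[0] != '.') /* skip "." and ".." */
                        n++;
        }
        closedir(d);
        return n - 1; /* exclude the fd used by opendir() itself */
}

int main(void)
{
        printf("open fds: %d\n", count_open_fds());
        return 0;
}

Comparing that number before and after EAL init makes it easy to see how
quickly 2MB pages eat into the fd limit.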
By the way, I've proposed a patch for 19.05 to address this issue. The downside
is that you'd be losing virtio with vhost-user backend support:
http://patches.dpdk.org/patch/50469/

It would be good if you tested it and reported back. Thanks!

(I should fix the wording of the documentation to avoid mentioning
--single-file-segments as a solution - I completely forgot that it creates
lock files...)

--
Thanks,
Anatoly