From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <551E57A6.9070405@intel.com>
Date: Fri, 03 Apr 2015 10:04:38 +0100
From: "Gonzalez Monroy, Sergio"
To: Jay Rolette
Cc: DPDK
References: <1427974230-8572-1-git-send-email-jerry.lilijun@huawei.com> <2705739.59RNFm1sab@xps13>
Subject: Re: [dpdk-dev] [PATCH] eal: decrease the memory init time with many hugepages setup
List-Id: patches and discussions about DPDK

On 02/04/2015 14:41, Jay Rolette wrote:
> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon wrote:
>
>> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>>> From: Lilijun
>>>
>>> In the function map_all_hugepages(), hugepage memory is truly allocated by
>>> memset(virtaddr, 0, hugepage_sz). Then it costs about 40s to finish the
>>> DPDK memory initialization when 40000 2M hugepages are set up in the host OS.
>> Yes, it's something we should try to reduce.
>>
> I have a patch in my tree that does the same opto, but it is commented out
> right now.
> In our case, two-thirds of the startup time for our entire app was
> due to that particular call: memset(virtaddr, 0, hugepage_sz). Just
> zeroing one byte per hugepage reduces that by 30% in my tests.
>
> The only reason I have it commented out is that I didn't have time to make
> sure there weren't side effects for DPDK or my app. For normal shared
> memory on Linux, pages are zero-initialized automatically once they are
> touched, so the memset isn't required, but I wasn't sure whether that
> applied to hugepages. Also wasn't sure how hugetlbfs factored into the
> equation.
>
> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>
I think the opto/patch is good ;)

I had a look at the Linux kernel sources (mm/hugetlb.c) and, at least since
2.6.32 (the minimum Linux kernel version supported by DPDK), the kernel
clears the hugepage (clear_huge_page) when it faults (hugetlb_no_page).

Primary DPDK apps call clear_hugedir, clearing previously allocated
hugepages and thus triggering hugepage faults (hugetlb_no_page) during
map_all_hugepages.

Note that even when we exit a primary DPDK app, hugepages remain allocated,
which is why apps such as dump_cfg are able to retrieve config/memory
information.

Sergio

>> In fact we can only write one byte to finish the allocation.
>>
>> Isn't it a security hole?
>>
> Not necessarily. If the kernel pre-zeroes the hugepages via CoW like normal
> pages, then definitely not.
>
> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
> properly initializing memory structures on startup as they are carved out
> of the hugepages, then it isn't a security hole. However, that approach is
> susceptible to bit rot... You can audit the code and make sure everything
> is kosher at first, but you have to worry about new code making assumptions
> about how memory is initialized.
>
>> This article speaks about "prezeroing optimizations" in the Linux kernel:
>> http://landley.net/writing/memory-faq.txt
>
> I read through that when I was trying to figure out whether hugepages
> were pre-zeroed or not. It doesn't talk about hugepages much beyond why
> they are useful for reducing TLB misses.
>
> Jay