Date: Fri, 03 Apr 2015 13:00:38 +0100
From: "Gonzalez Monroy, Sergio"
To: Thomas Monjalon, Lilijun
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] eal: decrease the memory init time with many hugepages setup
Message-ID: <551E80E6.502@intel.com>
In-Reply-To: <2447953.M95UbNe7b9@xps13>
References: <1427974230-8572-1-git-send-email-jerry.lilijun@huawei.com> <551E57A6.9070405@intel.com> <2447953.M95UbNe7b9@xps13>

On 03/04/2015 10:14, Thomas Monjalon wrote:
> 2015-04-03 10:04, Gonzalez Monroy, Sergio:
>> On 02/04/2015 14:41, Jay Rolette wrote:
>>> On Thu, Apr 2, 2015 at 7:55 AM, Thomas Monjalon wrote:
>>>
>>>> 2015-04-02 19:30, jerry.lilijun@huawei.com:
>>>>> From: Lilijun
>>>>>
>>>>> In the function map_all_hugepages(), hugepage memory is actually
>>>>> allocated by memset(virtaddr, 0, hugepage_sz). This takes about 40s
>>>>> to finish the DPDK memory initialization when 40000 2M hugepages are
>>>>> set up in the host OS.
>>>> Yes, it's something we should try to reduce.
>>>>
>>> I have a patch in my tree that does the same opto, but it is commented
>>> out right now. In our case, two thirds of the startup time for our
>>> entire app was due to that particular call - memset(virtaddr, 0,
>>> hugepage_sz). Just zeroing 1 byte per huge page reduces that by 30%
>>> in my tests.
>>>
>>> The only reason I have it commented out is that I didn't have time to
>>> make sure there weren't side effects for DPDK or my app. For normal
>>> shared memory on Linux, pages are initialized to zero automatically
>>> once they are touched, so the memset isn't required, but I wasn't sure
>>> whether that applied to huge pages. Also wasn't sure how hugetlbfs
>>> factored into the equation.
>>>
>>> Hopefully someone can chime in on that. Would love to uncomment the opto :)
>>>
>> I think the opto/patch is good ;)
>>
>> I had a look at the Linux kernel sources (mm/hugetlb.c) and at least
>> since 2.6.32 (the minimum Linux kernel version supported by DPDK) the
>> kernel clears the hugepage (clear_huge_page) when it faults
>> (hugetlb_no_page).
>>
>> Primary DPDK apps do clear_hugedir, clearing previously allocated
>> hugepages, thus triggering hugepage faults (hugetlb_no_page) during
>> map_all_hugepages.
>>
>> Note that even when we exit a primary DPDK app, hugepages remain
>> allocated, which is why apps such as dump_cfg are able to retrieve
>> config/memory information.
> OK, thanks Sergio.
>
> So the patch should add a comment explaining that the memset is there
> to trigger the page faults, and why 1 byte is enough.
> I think we should also consider the remap_all_hugepages() function.

Good point!

You are right, I don't think we would even need to do the memset at all
in remap_all_hugepages, as we already have touched/allocated those pages.
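Something along these lines is what I have in mind for map_all_hugepages
(untested sketch, not an actual patch; virtaddr and hugepage_sz as in the
existing code):

    /* Instead of clearing the whole page ... */
    /* memset(virtaddr, 0, hugepage_sz); */

    /*
     * ... touch a single byte to trigger the hugepage fault.  The fault
     * path (hugetlb_no_page -> clear_huge_page) zeroes the page for us,
     * so writing 0 here is redundant but harmless.
     */
    *(volatile char *)virtaddr = 0;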
Sergio

>>>> Isn't it a security hole?
>>>>
>>> Not necessarily. If the kernel pre-zeroes the huge pages via CoW like
>>> normal pages, then definitely not.
>>>
>>> Even if the kernel doesn't pre-zero the pages, if DPDK takes care of
>>> properly initializing memory structures on startup as they are carved
>>> out of the huge pages, then it isn't a security hole. However, that
>>> approach is susceptible to bit rot... You can audit the code and make
>>> sure everything is kosher at first, but you have to worry about new
>>> code making assumptions about how memory is initialized.
>>>
>>>> This article speaks about "prezeroing optimizations" in the Linux kernel:
>>>> http://landley.net/writing/memory-faq.txt
>>> I read through that when I was trying to figure out whether huge pages
>>> were pre-zeroed or not. It doesn't talk about huge pages much beyond
>>> why they are useful for reducing TLB swaps.
>>>
>>> Jay
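PS: for anyone who wants to check the zero-on-fault behaviour discussed
above on their own kernel, a quick standalone test (untested sketch;
anonymous MAP_HUGETLB mappings need 2.6.32+ and at least one free 2M
hugepage, e.g. reserved via vm.nr_hugepages):

    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef MAP_HUGETLB
    #define MAP_HUGETLB 0x40000 /* x86 value; older libc headers lack it */
    #endif

    int main(void)
    {
        size_t sz = 2 * 1024 * 1024; /* one 2M hugepage */
        unsigned char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                                -1, 0);
        size_t i;

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* first access faults the page in; hugetlb_no_page should zero it */
        for (i = 0; i < sz; i++) {
            if (p[i] != 0) {
                printf("non-zero byte at offset %zu\n", i);
                return 1;
            }
        }
        printf("hugepage was zero-filled on fault\n");
        munmap(p, sz);
        return 0;
    }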