From: Zhihong Wang
To: dev@dpdk.org
Date: Thu, 19 Nov 2015 20:53:48 -0500
Message-Id: <1447984428-137897-3-git-send-email-zhihong.wang@intel.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1447984428-137897-1-git-send-email-zhihong.wang@intel.com>
References: <1447984428-137897-1-git-send-email-zhihong.wang@intel.com>
Subject: [dpdk-dev] [PATCH RFC v2 2/2] lib/librte_eal: Remove unnecessary hugepage zero-filling

The kernel fills newly allocated (huge) pages with zeros, so DPDK only
has to trigger the page allocation (done here via MAP_POPULATE at
mmap() time) rather than zero the memory itself.

Signed-off-by: Zhihong Wang
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 0de75cd..21a5146 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -399,8 +399,10 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 			return -1;
 		}
 
+		/* map the segment, and populate page tables,
+		 * the kernel fills this segment with zeros */
 		virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
-				MAP_SHARED, fd, 0);
+				MAP_SHARED | MAP_POPULATE, fd, 0);
 		if (virtaddr == MAP_FAILED) {
 			RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n", __func__,
 					strerror(errno));
@@ -410,7 +412,6 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 
 		if (orig) {
 			hugepg_tbl[i].orig_va = virtaddr;
-			memset(virtaddr, 0, hugepage_sz);
 		}
 		else {
 			hugepg_tbl[i].final_va = virtaddr;
@@ -529,22 +530,16 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 
 		old_addr = vma_addr;
 
-		/* map new, bigger segment */
+		/* map new, bigger segment, and populate page tables,
+		 * the kernel fills this segment with zeros */
 		vma_addr = mmap(vma_addr, total_size,
-				PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+				PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0);
 		if (vma_addr == MAP_FAILED || vma_addr != old_addr) {
 			RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n",
 					__func__, strerror(errno));
 			close(fd);
 			return -1;
 		}
-
-		/* touch the page. this is needed because kernel postpones mapping
-		 * creation until the first page fault. with this, we pin down
-		 * the page and it is marked as used and gets into process' pagemap.
-		 */
-		for (offset = 0; offset < total_size; offset += hugepage_sz)
-			*((volatile uint8_t*) RTE_PTR_ADD(vma_addr, offset));
 	}
 
 	/* set shared flock on the file. */
@@ -592,9 +587,6 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 			}
 		}
 
-		/* zero out the whole segment */
-		memset(hugepg_tbl[page_idx].final_va, 0, total_size);
-
 		page_idx++;
 	}
 
-- 
2.5.0
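
For reference, a minimal standalone sketch of the behavior the patch
relies on: mapping a hugetlbfs-backed file with MAP_POPULATE pre-faults
the pages, and the kernel hands them back already zero-filled, so no
explicit memset() or page-touching loop is needed. The mount point
/mnt/huge, the file name populate_demo, and the 2 MB hugepage size are
illustrative assumptions, not part of the patch; adjust them to match
the system.

/*
 * Illustrative sketch, not part of the patch: map one hugepage with
 * MAP_POPULATE and verify that it reads back as zeros.
 *
 * Assumptions: hugetlbfs mounted at /mnt/huge, 2 MB hugepages.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)	/* assumed 2 MB hugepage */

int main(void)
{
	size_t off;
	int fd = open("/mnt/huge/populate_demo", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* MAP_POPULATE makes mmap() populate the page tables up front;
	 * for a fresh hugetlbfs file the kernel supplies zeroed pages,
	 * so no page-fault loop or memset() has to follow. */
	uint8_t *va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
			MAP_SHARED | MAP_POPULATE, fd, 0);
	if (va == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* spot-check one byte per 4 KB: the mapping is already zero */
	for (off = 0; off < HUGEPAGE_SZ; off += 4096)
		if (va[off] != 0)
			printf("non-zero byte at offset %zu\n", off);

	munmap(va, HUGEPAGE_SZ);
	close(fd);
	unlink("/mnt/huge/populate_demo");
	return 0;
}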