From: Zhihong Wang <zhihong.wang@intel.com>
To: dev@dpdk.org
Date: Sun, 22 Nov 2015 14:13:35 -0500
Message-Id: <1448219615-63746-3-git-send-email-zhihong.wang@intel.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1448219615-63746-1-git-send-email-zhihong.wang@intel.com>
References: <1448219615-63746-1-git-send-email-zhihong.wang@intel.com>
Subject: [dpdk-dev] [PATCH 2/2] lib/librte_eal: Remove unnecessary hugepage zero-filling

The kernel fills newly allocated (huge) pages with zeros, so DPDK only
has to populate the page tables to trigger the allocation. Passing
MAP_POPULATE to mmap() does exactly that, which makes the explicit
page-touching loop and memset() calls unnecessary.

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 0de75cd..21a5146 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -399,8 +399,10 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 			return -1;
 		}
 
+		/* map the segment and populate the page tables;
+		 * the kernel fills this segment with zeros */
 		virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
-				MAP_SHARED, fd, 0);
+				MAP_SHARED | MAP_POPULATE, fd, 0);
 		if (virtaddr == MAP_FAILED) {
 			RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n", __func__,
 					strerror(errno));
@@ -410,7 +412,6 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 
 		if (orig) {
 			hugepg_tbl[i].orig_va = virtaddr;
-			memset(virtaddr, 0, hugepage_sz);
 		}
 		else {
 			hugepg_tbl[i].final_va = virtaddr;
@@ -529,22 +530,16 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 
 			old_addr = vma_addr;
 
-			/* map new, bigger segment */
+			/* map new, bigger segment and populate the page tables;
+			 * the kernel fills this segment with zeros */
 			vma_addr = mmap(vma_addr, total_size,
-					PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+					PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0);
 
 			if (vma_addr == MAP_FAILED || vma_addr != old_addr) {
 				RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n", __func__,
 						strerror(errno));
 				close(fd);
 				return -1;
 			}
-
-			/* touch the page. this is needed because kernel postpones mapping
-			 * creation until the first page fault. with this, we pin down
-			 * the page and it is marked as used and gets into process' pagemap.
-			 */
-			for (offset = 0; offset < total_size; offset += hugepage_sz)
-				*((volatile uint8_t*) RTE_PTR_ADD(vma_addr, offset));
 		}
 
 		/* set shared flock on the file. */
@@ -592,9 +587,6 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 			}
 		}
 
-		/* zero out the whole segment */
-		memset(hugepg_tbl[page_idx].final_va, 0, total_size);
-
 		page_idx++;
 	}
 
-- 
2.5.0
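
For reference, a minimal standalone sketch of the mechanism the patch relies
on: mmap()'ing a hugetlbfs-backed file with MAP_POPULATE pre-faults the
mapping at mmap() time, and the kernel hands the hugepage back already
zero-filled, so no explicit memset() is needed. This is not part of the
patch; the /mnt/huge mount point and the 2 MB page size are assumptions
for illustration only.

#define _GNU_SOURCE		/* for MAP_POPULATE */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)	/* assumed 2 MB hugepage size */

int main(void)
{
	/* hypothetical file on a hugetlbfs mount */
	int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* MAP_POPULATE pre-faults the whole mapping, so the hugepage is
	 * allocated right here, zero-filled by the kernel. */
	uint8_t *addr = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
			MAP_SHARED | MAP_POPULATE, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* no memset() needed: a freshly allocated hugepage reads as zeros */
	printf("first byte: %d\n", addr[0]);

	munmap(addr, HUGEPAGE_SZ);
	close(fd);
	return 0;
}

The patch applies the same idea inside map_all_hugepages() and
remap_all_hugepages(): with MAP_POPULATE on the mmap() call, the
page-touching loop and the memset() calls become redundant and are removed.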