From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 22 Nov 2015 18:28:00 -0800
From: Stephen Hemminger
To: Zhihong Wang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 2/2] lib/librte_eal: Remove unnecessary hugepage zero-filling
Message-ID: <20151122182800.397e0701@xeon-e3>
In-Reply-To: <1448219615-63746-3-git-send-email-zhihong.wang@intel.com>
References: <1448219615-63746-1-git-send-email-zhihong.wang@intel.com>
 <1448219615-63746-3-git-send-email-zhihong.wang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 22 Nov 2015 14:13:35 -0500
Zhihong Wang wrote:

> The kernel fills new allocated (huge) pages with zeros.
> DPDK just has to populate page tables to trigger the allocation.
>
> Signed-off-by: Zhihong Wang
> ---
>  lib/librte_eal/linuxapp/eal/eal_memory.c | 20 ++++++--------------
>  1 file changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> index 0de75cd..21a5146 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> @@ -399,8 +399,10 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
>  			return -1;
>  		}
>
> +		/* map the segment, and populate page tables,
> +		 * the kernel fills this segment with zeros */
>  		virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
> -				MAP_SHARED, fd, 0);
> +				MAP_SHARED | MAP_POPULATE, fd, 0);
>  		if (virtaddr == MAP_FAILED) {
>  			RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n", __func__,
>  					strerror(errno));
> @@ -410,7 +412,6 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
>
>  		if (orig) {
>  			hugepg_tbl[i].orig_va = virtaddr;
> -			memset(virtaddr, 0, hugepage_sz);
>  		}
>  		else {
>  			hugepg_tbl[i].final_va = virtaddr;
> @@ -529,22 +530,16 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
>
>  		old_addr = vma_addr;
>
> -		/* map new, bigger segment */
> +		/* map new, bigger segment, and populate page tables,
> +		 * the kernel fills this segment with zeros */
>  		vma_addr = mmap(vma_addr, total_size,
> -				PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> +				PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0);
>
>  		if (vma_addr == MAP_FAILED || vma_addr != old_addr) {
>  			RTE_LOG(ERR, EAL, "%s(): mmap failed: %s\n", __func__, strerror(errno));
>  			close(fd);
>  			return -1;
>  		}
> -
> -		/* touch the page. this is needed because kernel postpones mapping
> -		 * creation until the first page fault. with this, we pin down
> -		 * the page and it is marked as used and gets into process' pagemap.
> -		 */
> -		for (offset = 0; offset < total_size; offset += hugepage_sz)
> -			*((volatile uint8_t*) RTE_PTR_ADD(vma_addr, offset));
>  	}
>
>  	/* set shared flock on the file. */
> @@ -592,9 +587,6 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
>  		}
>  	}
>
> -	/* zero out the whole segment */
> -	memset(hugepg_tbl[page_idx].final_va, 0, total_size);
> -
>  		page_idx++;
>  	}

Nice, especially on slow machines or with large memory.

Acked-by: Stephen Hemminger