From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
To: dev@dpdk.org, Anatoly Burakov <anatoly.burakov@intel.com>
Cc: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>, stable@dpdk.org
Date: Fri, 1 Jun 2018 14:59:19 +0200
Message-Id: <1527857960-109306-1-git-send-email-dariuszx.stojaczyk@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1527860361-162114-1-git-send-email-dariuszx.stojaczyk@intel.com>
References: <1527860361-162114-1-git-send-email-dariuszx.stojaczyk@intel.com>
Subject: [dpdk-stable] [PATCH v3 1/2] memalloc: do not leave unmapped holes in EAL virtual memory area

EAL reserves a huge area in virtual address space to provide virtual
address contiguity for e.g. future memory extensions (memory hotplug).
During memory hotplug, if the hugepage mmap succeeds but the resulting
mapping doesn't meet EAL's requirements, EAL unmaps it straight away,
leaving a hole in its virtual memory area and making that hole available
to everyone. Since EAL still thinks it owns the entire region, it may
later try to mmap it with MAP_FIXED, possibly overriding a user's
mapping that was made in the meantime.

This patch ensures each such hole is mapped back by EAL, so that it
won't be available to anyone else.

Changes from v2:
 * replaced rte_panic() with a CRIT log
 * added "git fixline" tags

Changes from v1:
 * checkpatch fixes

Fixes: 582bed1e1d1d ("mem: support mapping hugepages at runtime")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 8c11f98..6be6680 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -39,6 +39,7 @@
 #include "eal_filesystem.h"
 #include "eal_internal_cfg.h"
 #include "eal_memalloc.h"
+#include "eal_private.h"
 
 /*
  * not all kernel version support fallocate on hugetlbfs, so fall back to
@@ -490,6 +491,8 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	int ret = 0;
 	int fd;
 	size_t alloc_sz;
+	int flags;
+	void *new_addr;
 
 	/* takes out a read lock on segment or segment list */
 	fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx);
@@ -585,6 +588,20 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 
 mapped:
 	munmap(addr, alloc_sz);
+	flags = MAP_FIXED;
+#ifdef RTE_ARCH_PPC_64
+	flags |= MAP_HUGETLB;
+#endif
+	new_addr = eal_get_virtual_area(addr, &alloc_sz, alloc_sz, 0, flags);
+	if (new_addr != addr) {
+		if (new_addr != NULL)
+			munmap(new_addr, alloc_sz);
+		/* we're leaving a hole in our virtual address space. if
+		 * somebody else maps this hole now, we could accidentally
+		 * override it in the future.
+		 */
+		RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n");
+	}
 resized:
 	if (internal_config.single_file_segments) {
 		resize_hugefile(fd, path, list_idx, seg_idx, map_offset,
-- 
2.7.4
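
For context, here is a minimal, self-contained sketch of the reservation
scheme the commit message describes: reserve a contiguous region with an
anonymous PROT_NONE mapping, punch a hole the way a failed hugepage
allocation would, and plug the hole back with MAP_FIXED so nothing else
can land inside the reserved range. This is an illustration only, not
code from the patch; it uses plain mmap() instead of DPDK's internal
eal_get_virtual_area(), and the sizes and names are made up for the
example.

/* Standalone illustration (not part of the patch) of reserving a VA
 * region, creating a hole, and re-reserving the hole with MAP_FIXED. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define RESERVED_SZ  (64UL << 20)   /* whole reserved VA region (64 MB) */
#define HOLE_OFF     (16UL << 20)   /* offset of the failed segment */
#define HOLE_SZ      (2UL << 20)    /* size of the failed segment */

int main(void)
{
	/* reserve a contiguous VA region without backing memory */
	char *base = mmap(NULL, RESERVED_SZ, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return EXIT_FAILURE;

	/* simulate a failed hugepage allocation: unmapping creates a hole
	 * that any other mmap(NULL, ...) in this process may now reuse */
	munmap(base + HOLE_OFF, HOLE_SZ);

	/* plug the hole again with MAP_FIXED so the range stays reserved */
	void *fixed = mmap(base + HOLE_OFF, HOLE_SZ, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (fixed != base + HOLE_OFF) {
		fprintf(stderr, "could not re-reserve the hole\n");
		return EXIT_FAILURE;
	}

	munmap(base, RESERVED_SZ);
	return EXIT_SUCCESS;
}

In the patch itself, the equivalent plug-back is done through
eal_get_virtual_area(addr, &alloc_sz, alloc_sz, 0, flags), which
re-reserves the range at the original address with MAP_FIXED, and a CRIT
log is emitted if that fails.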