From: Anatoly Burakov
To: dev@dpdk.org
Cc: stable@dpdk.org
Date: Fri, 24 Jan 2020 17:05:46 +0000
Message-Id: <99d72c1b91e3fce36713b529ab47e8aa3fd78454.1579885526.git.anatoly.burakov@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH v2] eal/mem: preallocate VA space in no-huge mode
List-Id: DPDK patches and discussions

When --no-huge mode is used, the memory is currently allocated with
mmap(NULL, ...). This is fine in most cases, but can fail when DPDK is
run on a machine with an IOMMU whose address width is narrower than
that of the VA, because we are not specifying an address hint for the
mmap() call. Fix this by preallocating VA space before mapping it.
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov
---

Notes:
    v2:
    - Add unmap on unsuccessful mmap

    I couldn't figure out which specific commit introduced the issue, so
    there's no Fixes: tag. The most likely candidate is the one that
    introduced the DMA mask in the first place, but I'm not sure.

 lib/librte_eal/linux/eal/eal_memory.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/linux/eal/eal_memory.c b/lib/librte_eal/linux/eal/eal_memory.c
index 43e4ffc757..ce6326672f 100644
--- a/lib/librte_eal/linux/eal/eal_memory.c
+++ b/lib/librte_eal/linux/eal/eal_memory.c
@@ -1340,6 +1340,8 @@ eal_legacy_hugepage_init(void)
 
 	/* hugetlbfs can be disabled */
 	if (internal_config.no_hugetlbfs) {
+		void *prealloc_addr;
+		size_t mem_sz;
 		struct rte_memseg_list *msl;
 		int n_segs, cur_seg, fd, flags;
 #ifdef MEMFD_SUPPORTED
@@ -1395,11 +1397,25 @@ eal_legacy_hugepage_init(void)
 			}
 		}
 #endif
-		addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
-				flags, fd, 0);
+		/* preallocate address space for the memory, so that it can
+		 * fit into the DMA mask.
+		 */
+		mem_sz = internal_config.memory;
+		prealloc_addr = eal_get_virtual_area(
+				NULL, &mem_sz, page_sz, 0, 0);
+		if (prealloc_addr == NULL) {
+			RTE_LOG(ERR, EAL,
+					"%s: reserving memory area failed: "
+					"%s\n",
+					__func__, strerror(errno));
+			return -1;
+		}
+		addr = mmap(prealloc_addr, internal_config.memory,
+				PROT_READ | PROT_WRITE, flags | MAP_FIXED, fd, 0);
 		if (addr == MAP_FAILED) {
 			RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
 					strerror(errno));
+			munmap(prealloc_addr, mem_sz);
 			return -1;
 		}
 		msl->base_va = addr;
-- 
2.17.1