From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: dev@dpdk.org
Date: Thu, 12 May 2016 15:33:58 +0100
Message-Id: <1463063640-30715-2-git-send-email-alejandro.lucero@netronome.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1463063640-30715-1-git-send-email-alejandro.lucero@netronome.com>
References: <1463063640-30715-1-git-send-email-alejandro.lucero@netronome.com>
Subject: [dpdk-dev] [PATCH 1/3] eal/linux: add function for checking hugepages within device supported address range

This is needed to avoid problems with devices that cannot address all of
the available physical memory.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/common/include/rte_memory.h |  6 ++++++
 lib/librte_eal/linuxapp/eal/eal_memory.c   | 27 +++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index f8dbece..67b0b28 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -256,6 +256,12 @@ rte_mem_phy2mch(uint32_t memseg_id __rte_unused, const phys_addr_t phy_addr)
 }
 #endif
 
+/**
+ * Check that all hugepages are within the address range
+ * supported by the device DMA mask.
+ */
+int rte_eal_hugepage_check_address_mask(uint64_t dma_mask);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 5b9132c..2cd046d 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1037,6 +1037,33 @@ calc_num_pages_per_socket(uint64_t * memory,
 }
 
 /*
+ * Some devices have addressing limitations. A PMD calls this function
+ * (indirectly) and an error is raised if any hugepage is outside the
+ * supported address range. As hugepages are ordered by physical address,
+ * once one page is out of range, all remaining pages are out of range too.
+ */
+int
+rte_eal_hugepage_check_address_mask(uint64_t dma_mask)
+{
+	const struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	phys_addr_t physaddr;
+	int i = 0;
+
+	while (i < RTE_MAX_MEMSEG && mcfg->memseg[i].len > 0) {
+		physaddr = mcfg->memseg[i].phys_addr + mcfg->memseg[i].len;
+		RTE_LOG(DEBUG, EAL, "Checking page with address 0x%"PRIx64" and device"
+			" mask 0x%"PRIx64"\n", physaddr, dma_mask);
+		if (physaddr & ~dma_mask) {
+			RTE_LOG(ERR, EAL, "Allocated hugepages are out of device address"
+				" range.\n");
+			return -1;
+		}
+		i++;
+	}
+	return 0;
+}
+
+/*
  * Prepare physical memory mapping: fill configuration structure with
  * these infos, return 0 on success.
  *  1. map N huge pages in separate files in hugetlbfs
-- 
1.9.1
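
For context, below is a minimal sketch of how a PMD might use the new check
during device setup. The 40-bit DMA mask and the example_pmd_check_dma_range()
wrapper are illustrative assumptions only; they are not part of this patch.

#include <rte_log.h>
#include <rte_memory.h>

/* Example DMA mask for a device that can only address 40 bits (assumed). */
#define EXAMPLE_DMA_MASK_40BIT ((1ULL << 40) - 1)

/* Hypothetical helper a PMD could call from its init/probe path. */
static int
example_pmd_check_dma_range(void)
{
	/* Returns -1 if any allocated hugepage lies above the 40-bit limit. */
	if (rte_eal_hugepage_check_address_mask(EXAMPLE_DMA_MASK_40BIT) < 0) {
		RTE_LOG(ERR, PMD, "hugepages outside device DMA address range\n");
		return -1;
	}
	return 0;
}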