From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <alejandro.lucero@netronome.com>
Received: from netronome.com (host-79-78-33-110.static.as9105.net
	[79.78.33.110]) by dpdk.org (Postfix) with ESMTP id 5CE541B142
	for <dev@dpdk.org>; Thu,  1 Nov 2018 20:53:31 +0100 (CET)
Received: from netronome.com (localhost [127.0.0.1])
	by netronome.com (8.15.2/8.15.2/Debian-10) with ESMTP id wA1JrVot019539
	for <dev@dpdk.org>; Thu, 1 Nov 2018 19:53:31 GMT
Received: (from root@localhost)
	by netronome.com (8.15.2/8.15.2/Submit) id wA1JrV32019538
	for dev@dpdk.org; Thu, 1 Nov 2018 19:53:31 GMT
From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: dev@dpdk.org
Date: Thu,  1 Nov 2018 19:53:29 +0000
Message-Id: <20181101195330.19464-7-alejandro.lucero@netronome.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181101195330.19464-1-alejandro.lucero@netronome.com>
References: <20181101195330.19464-1-alejandro.lucero@netronome.com>
Subject: [dpdk-dev] [PATCH v2 6/7] eal/mem: use DMA mask check for legacy
	memory
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
X-List-Received-Date: Thu, 01 Nov 2018 19:53:31 -0000

If a device reports addressing limitations through a DMA mask, the
IOVAs of mapped memory need to be validated to ensure correct
functionality. Previous patches introduced this DMA check for the
memory code currently used by default, but other options, such as
legacy memory and the no-hugepages mode, also need to be covered.

This patch adds the DMA mask check for those cases.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fce86fda6..c7935879a 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1393,6 +1393,18 @@ eal_legacy_hugepage_init(void)
 
 		addr = RTE_PTR_ADD(addr, (size_t)page_sz);
 	}
+	if (mcfg->dma_maskbits &&
+	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+		RTE_LOG(ERR, EAL,
+			"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
+			__func__);
+		if (rte_eal_iova_mode() == RTE_IOVA_VA &&
+		    rte_eal_using_phys_addrs())
+			RTE_LOG(ERR, EAL,
+				"%s(): Please try initializing EAL with --iova-mode=pa parameter.\n",
+				__func__);
+		goto fail;
+	}
 	return 0;
 }
 
@@ -1628,6 +1640,14 @@ eal_legacy_hugepage_init(void)
 
 		rte_fbarray_destroy(&msl->memseg_arr);
 	}
+	if (mcfg->dma_maskbits &&
+	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+		RTE_LOG(ERR, EAL,
+			"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
+			__func__);
+		goto fail;
+	}
+
 	return 0;
 
 fail:
-- 
2.17.1
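
For reference, a minimal driver-side sketch of how the check used by
this patch is meant to be consumed. rte_mem_check_dma_mask() is the
API introduced earlier in this series (returns 0 when all mapped
IOVAs fit within the mask); the 40-bit limit and the probe function
below are hypothetical, for illustration only:

	#include <rte_memory.h>
	#include <rte_log.h>

	/* Hypothetical device limit: 40 addressable bits. */
	#define EXAMPLE_DMA_MASK_BITS 40

	static int
	example_pmd_probe(void)
	{
		/*
		 * Fails if any IOVA already mapped by EAL lies above
		 * the device's addressable range.
		 */
		if (rte_mem_check_dma_mask(EXAMPLE_DMA_MASK_BITS)) {
			RTE_LOG(ERR, PMD,
				"memory mapped beyond 40-bit limit, aborting probe\n");
			return -1;
		}
		return 0;
	}

With legacy memory the hugepage mappings are fixed at init time, hence
the checks this patch adds at the end of eal_legacy_hugepage_init():
if the configured mask cannot cover the resulting IOVAs, failing early
with a hint to retry with --iova-mode=pa is preferable to silent DMA
faults later.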