From: Alejandro Lucero
To: dev@dpdk.org
Date: Wed, 31 Oct 2018 17:29:31 +0000
Message-Id: <20181031172931.11894-8-alejandro.lucero@netronome.com>
In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com>
References: <20181031172931.11894-1-alejandro.lucero@netronome.com>
Subject: [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory

If a device reports addressing limitations through a DMA mask, the IOVAs
of mapped memory need to be checked to ensure the device can actually
address them. Previous patches introduced this DMA mask check for the
memory code currently used by default, but other options, legacy memory
and the no-hugepages mode, need to be covered as well. This patch adds
the DMA mask check for those cases.
Signed-off-by: Alejandro Lucero
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fce86fda6..2a3a8c7a3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
 		addr = RTE_PTR_ADD(addr, (size_t)page_sz);
 	}
 
+	if (mcfg->dma_maskbits) {
+		if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+			RTE_LOG(ERR, EAL,
+				"%s(): couldn't allocate memory due to DMA mask\n",
+				__func__);
+			goto fail;
+		}
+	}
 	return 0;
 }
 
@@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
 		rte_fbarray_destroy(&msl->memseg_arr);
 	}
 
+	if (mcfg->dma_maskbits) {
+		if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+			RTE_LOG(ERR, EAL,
+				"%s(): couldn't allocate memory due to DMA mask\n",
+				__func__);
+			goto fail;
+		}
+	}
+
 	return 0;
 
 fail:
-- 
2.17.1