From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from netronome.com (host-79-78-33-110.static.as9105.net [79.78.33.110])
 by dpdk.org (Postfix) with ESMTP id 1890134EF
 for <dev@dpdk.org>; Wed, 31 Oct 2018 18:29:31 +0100 (CET)
Received: from netronome.com (localhost [127.0.0.1])
 by netronome.com (8.15.2/8.15.2/Debian-10) with ESMTP id w9VHTVIF011965
 for <dev@dpdk.org>; Wed, 31 Oct 2018 17:29:31 GMT
Received: (from root@localhost)
 by netronome.com (8.15.2/8.15.2/Submit) id w9VHTVld011964
 for dev@dpdk.org; Wed, 31 Oct 2018 17:29:31 GMT
From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: dev@dpdk.org
Date: Wed, 31 Oct 2018 17:29:29 +0000
Message-Id: <20181031172931.11894-6-alejandro.lucero@netronome.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com>
References: <20181031172931.11894-1-alejandro.lucero@netronome.com>
Subject: [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
X-List-Received-Date: Wed, 31 Oct 2018 17:29:32 -0000

If the DMA mask check shows mapped memory outside the supported range
specified by the DMA mask, nothing can be done but report the error and
fail. This can mean the app is not executed at all, or that dynamic
memory allocation is precluded once the app is running. In either case,
we can advise the user to force IOVA as PA if IOVA is currently VA and
the user is root.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 7d423089d..711622f19 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -5,8 +5,10 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 	size_t alloc_sz;
 	int allocd_pages;
 	void *ret, *map_addr;
-	uint64_t mask;
 
 	alloc_sz = (size_t)pg_sz * n_segs;
 
@@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 		goto fail;
 	}
 
+	/* Once we have all the memseg lists configured, if there is a dma mask
+	 * set, check iova addresses are not out of range. Otherwise the device
+	 * setting the dma mask could have problems with the mapped memory.
+	 *
+	 * There are two situations when this can happen:
+	 * 1) memory initialization
+	 * 2) dynamic memory allocation
+	 *
+	 * For 1), an error when checking dma mask implies app can not be
+	 * executed. For 2) implies the new memory can not be added.
+	 */
 	if (mcfg->dma_maskbits) {
 		if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
-			RTE_LOG(ERR, EAL,
-				"%s(): couldn't allocate memory due to DMA mask\n",
-				__func__);
+			/* Currently this can only happen if IOMMU is enabled
+			 * with RTE_ARCH_X86. It is not safe to use this memory
+			 * so returning an error here.
+			 *
+			 * If IOVA is VA, advice to try with '--iova-mode pa'
+			 * which could solve some situations when IOVA VA is not
+			 * really needed.
+			 */
+			uid_t user = getuid();
+			if ((rte_eal_iova_mode() == RTE_IOVA_VA) && user == 0)
+				RTE_LOG(ERR, EAL,
+					"%s(): couldn't allocate memory due to DMA mask.\n"
+					"Try with 'iova-mode=pa'\n",
+					__func__);
+			else
+				RTE_LOG(ERR, EAL,
+					"%s(): couldn't allocate memory due to DMA mask\n",
+					__func__);
 			goto fail;
 		}
 	}
-- 
2.17.1
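
For readers following the series, here is a minimal sketch (not part of this
patch) of how an application could combine rte_mem_check_dma_mask(), added
earlier in this series, with rte_eal_iova_mode() after EAL initialization to
give the same "--iova-mode pa" hint from application code. The 40-bit mask
value, the header choices and the printf-based reporting are illustrative
assumptions only:

	/* Sketch: verify a hypothetical 40-bit DMA limitation after EAL init
	 * and suggest '--iova-mode pa' when IOVA as VA is in use.
	 */
	#include <stdio.h>

	#include <rte_eal.h>
	#include <rte_memory.h>

	#define EXAMPLE_DMA_MASK_BITS 40	/* hypothetical device limit */

	int
	main(int argc, char **argv)
	{
		if (rte_eal_init(argc, argv) < 0)
			return -1;

		/* Non-zero means some mapped IOVA lies above the mask. */
		if (rte_mem_check_dma_mask(EXAMPLE_DMA_MASK_BITS) != 0) {
			if (rte_eal_iova_mode() == RTE_IOVA_VA)
				printf("memory out of DMA range, retry with '--iova-mode pa'\n");
			else
				printf("memory out of DMA range\n");
			return -1;
		}

		return 0;
	}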