From: Anatoly Burakov
To: dev@dpdk.org
Date: Tue, 5 May 2020 15:48:56 +0000
Subject: [dpdk-dev] [PATCH] mem: check DMA mask for user-supplied IOVA addresses
X-Mailer: git-send-email 2.17.1

Currently, the external memory API silently succeeds even when the IOVA
addresses supplied by the user do not fit into the DMA mask. This can
cause hard-to-debug issues, or lead to failed kernel VFIO DMA mappings
being accepted. Fix it so that, whenever IOVA addresses are provided,
they are checked against the DMA mask before the memory is added to the
heap.

Signed-off-by: Anatoly Burakov
---
 lib/librte_eal/common/rte_malloc.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index f1b73168bd..0d3a3ef93f 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -392,6 +392,29 @@ find_named_heap(const char *name)
 	return NULL;
 }
 
+static int
+check_iova_addrs_dma_mask(rte_iova_t iova_addrs[], unsigned int n_pages,
+		size_t page_sz)
+{
+	unsigned int i, bits;
+	rte_iova_t max = 0;
+
+	/* we only care about the biggest address we will get */
+	for (i = 0; i < n_pages; i++) {
+		rte_iova_t first = iova_addrs[i];
+		rte_iova_t last = first + page_sz - 1;
+		max = RTE_MAX(last, max);
+	}
+
+	bits = rte_fls_u64(max);
+	if (rte_mem_check_dma_mask(bits) != 0) {
+		RTE_LOG(ERR, EAL, "IOVA 0x%" PRIx64 " does not fit into the DMA mask\n",
+			max);
+		return -1;
+	}
+	return 0;
+}
+
 int
 rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 		rte_iova_t iova_addrs[], unsigned int n_pages, size_t page_sz)
@@ -412,6 +435,12 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 		rte_errno = EINVAL;
 		return -1;
 	}
+	/* check if all IOVAs fit into the DMA mask */
+	if (iova_addrs != NULL && check_iova_addrs_dma_mask(iova_addrs,
+			n_pages, page_sz) != 0) {
+		rte_errno = EINVAL;
+		return -1;
+	}
 	rte_mcfg_mem_write_lock();
 
 	/* find our heap */
-- 
2.17.1
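
A note for reviewers on the bit-width arithmetic the new check relies
on: rte_fls_u64() returns the 1-based position of the most significant
set bit, so the last byte of the highest-addressed page maps directly
onto the number of address bits the DMA mask must cover. A minimal
standalone sketch of that arithmetic (illustrative only, not part of
the patch; the page base IOVA is made up):

#include <inttypes.h>
#include <stdio.h>

#include <rte_common.h>

int
main(void)
{
	/* a 2M page based at IOVA 2^35: its last byte still has bit 35
	 * (the 36th bit) as its highest set bit, so reaching it needs
	 * a DMA mask of at least 36 bits */
	uint64_t first = UINT64_C(1) << 35;
	uint64_t last = first + (UINT64_C(2) << 20) - 1;

	printf("0x%" PRIx64 " needs %u address bits\n",
		last, rte_fls_u64(last));	/* 0x8001fffff needs 36 */
	return 0;
}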
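
The caller-facing effect, through the public API: with this patch,
rte_malloc_heap_memory_add() fails with rte_errno set to EINVAL when a
user-supplied IOVA cannot be covered by the DMA mask. Below is a sketch
under stated assumptions: the heap name and the deliberately out-of-range
IOVA are invented for illustration, and the rejection only triggers if
some device has already registered a DMA mask narrower than the 57 bits
this IOVA needs (with no mask registered, rte_mem_check_dma_mask()
imposes no limit).

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_malloc.h>
#include <rte_memory.h>

#define EXT_PAGE_SZ 4096	/* one system page keeps mmap() alignment valid */

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return EXIT_FAILURE;

	/* back the external heap with one anonymous page */
	void *va = mmap(NULL, EXT_PAGE_SZ, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED)
		return EXIT_FAILURE;

	if (rte_malloc_heap_create("ext_heap") != 0)
		return EXIT_FAILURE;

	/* an IOVA needing 57 address bits; rejected if a device has
	 * registered a narrower DMA mask (e.g. 48 bits) */
	rte_iova_t iova[] = { (rte_iova_t)1 << 56 };

	if (rte_malloc_heap_memory_add("ext_heap", va, EXT_PAGE_SZ,
			iova, 1, EXT_PAGE_SZ) != 0 && rte_errno == EINVAL)
		printf("IOVA rejected by the DMA mask check\n");

	return EXIT_SUCCESS;
}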