From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id A75D6A046B
	for ; Thu, 25 Jul 2019 11:52:31 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 897B31C2D3;
	Thu, 25 Jul 2019 11:52:29 +0200 (CEST)
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
	by dpdk.org (Postfix) with ESMTP id C5DD91C2D3
	for ; Thu, 25 Jul 2019 11:52:27 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga003.jf.intel.com ([10.7.209.27])
	by fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	25 Jul 2019 02:52:25 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.64,306,1559545200"; d="scan'208";a="172588349"
Received: from silpixa00399498.ir.intel.com (HELO silpixa00399498.ger.corp.intel.com) ([10.237.223.125])
	by orsmga003.jf.intel.com with ESMTP; 25 Jul 2019 02:52:25 -0700
From: Anatoly Burakov
To: dev@dpdk.org
Cc: david.marchand@redhat.com, jerinj@marvell.com, thomas@monjalon.net
Date: Thu, 25 Jul 2019 10:52:24 +0100
Message-Id:
X-Mailer: git-send-email 2.17.1
In-Reply-To: <5d8f83fb7dd574d83a044c6a01e2613798f256c3.1563986790.git.anatoly.burakov@intel.com>
References: <5d8f83fb7dd574d83a044c6a01e2613798f256c3.1563986790.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH v2] eal: pick IOVA as PA if IOMMU is not available
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

When an IOMMU is not available, /sys/kernel/iommu_groups will not be
populated. This has been the case since at least kernel 3.6, when VFIO
support was added. If the directory is empty, EAL should not pick
IOVA as VA as the default IOVA mode.

Signed-off-by: Anatoly Burakov
---

Notes:
    v2:
    - Decouple IOMMU check from VFIO
    - Add a check for physical address availability

 lib/librte_eal/linux/eal/eal.c      | 21 ++++++++++++++++++--
 lib/librte_eal/linux/eal/eal_vfio.c | 30 +++++++++++++++++++++++++++++
 lib/librte_eal/linux/eal/eal_vfio.h |  2 ++
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index 34db78753..207ee0b1c 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -1061,8 +1061,25 @@ rte_eal_init(int argc, char **argv)
 		enum rte_iova_mode iova_mode = rte_bus_get_iommu_class();
 
 		if (iova_mode == RTE_IOVA_DC) {
-			iova_mode = RTE_IOVA_VA;
-			RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode, select IOVA as VA mode.\n");
+			RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode.\n");
+
+			if (!phys_addrs) {
+				/* if we have no access to physical addresses,
+				 * pick IOVA as VA mode.
+				 */
+				iova_mode = RTE_IOVA_VA;
+				RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n");
+			} else if (vfio_iommu_enabled()) {
+				/* we have an IOMMU, pick IOVA as VA mode */
+				iova_mode = RTE_IOVA_VA;
+				RTE_LOG(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.\n");
+			} else {
+				/* physical addresses available, and no IOMMU
+				 * found, so pick IOVA as PA.
+				 */
+				iova_mode = RTE_IOVA_PA;
+				RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
+			}
 		}
 #ifdef RTE_LIBRTE_KNI
 		/* Workaround for KNI which requires physical address to work */
diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 501c74f23..92d290284 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2010-2018 Intel Corporation
  */
 
+#include <dirent.h>
 #include
 #include
 #include
@@ -19,6 +20,8 @@
 #include "eal_vfio.h"
 #include "eal_private.h"
 
+#define VFIO_KERNEL_IOMMU_GROUPS_PATH "/sys/kernel/iommu_groups"
+
 #ifdef VFIO_PRESENT
 
 #define VFIO_MEM_EVENT_CLB_NAME "vfio_mem_event_clb"
@@ -2147,3 +2150,30 @@ rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 }
 
 #endif /* VFIO_PRESENT */
+
+/*
+ * On Linux 3.6+, even if VFIO is not loaded, whenever IOMMU is enabled in the
+ * BIOS and in the kernel, the /sys/kernel/iommu_groups path will contain
+ * kernel IOMMU groups. If IOMMU is not enabled, that path will be empty.
+ * Therefore, checking if the path is empty will tell us if IOMMU is enabled.
+ */
+int
+vfio_iommu_enabled(void)
+{
+	DIR *dir = opendir(VFIO_KERNEL_IOMMU_GROUPS_PATH);
+	struct dirent *d;
+	int n = 0;
+
+	/* if directory doesn't exist, assume IOMMU is not enabled */
+	if (dir == NULL)
+		return 0;
+
+	while ((d = readdir(dir)) != NULL) {
+		/* skip dot and dot-dot */
+		if (++n > 2)
+			break;
+	}
+	closedir(dir);
+
+	return n > 2;
+}
diff --git a/lib/librte_eal/linux/eal/eal_vfio.h b/lib/librte_eal/linux/eal/eal_vfio.h
index cb2d35fb1..58c7a7309 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.h
+++ b/lib/librte_eal/linux/eal/eal_vfio.h
@@ -133,6 +133,8 @@ vfio_has_supported_extensions(int vfio_container_fd);
 int
 vfio_mp_sync_setup(void);
 
+int vfio_iommu_enabled(void);
+
 #define EAL_VFIO_MP "eal_vfio_mp_sync"
 
 #define SOCKET_REQ_CONTAINER 0x100
-- 
2.17.1
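
P.S. For readers who want to sanity-check the detection outside of EAL, below
is a minimal standalone sketch of the same logic as vfio_iommu_enabled() in
the patch. The /sys/kernel/iommu_groups path and the "more than . and .."
threshold are taken from the patch; the file name iommu_check.c, the main()
wrapper and the printed messages are made up for illustration only.

/*
 * iommu_check.c - illustrative sketch, not part of the patch.
 * Counts entries in /sys/kernel/iommu_groups: anything beyond "." and ".."
 * means the kernel has at least one IOMMU group, i.e. an IOMMU is enabled.
 */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	DIR *dir = opendir("/sys/kernel/iommu_groups");
	int n = 0;

	if (dir == NULL) {
		/* directory missing: kernel exposes no IOMMU groups at all */
		printf("IOMMU not detected\n");
		return 0;
	}
	while (readdir(dir) != NULL) {
		/* stop as soon as we see more than "." and ".." */
		if (++n > 2)
			break;
	}
	closedir(dir);

	printf("IOMMU %s\n", n > 2 ? "detected" : "not detected");
	return 0;
}

Built with e.g. "cc -o iommu_check iommu_check.c", it reports whether the
running kernel exposes any IOMMU groups, which is the same condition the new
vfio_iommu_enabled() check uses when deciding between IOVA as VA and PA.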