From: "Sirvys, Andrius"
To: "Burakov, Anatoly", dev@dpdk.org
Cc: stable@dpdk.org
Thread-Topic: [PATCH] vfio: use contiguous mapping for IOVA as VA mode
Date: Tue, 23 Jul 2019 10:30:36 +0000
Message-ID: <49C20589590A5F43815A562C49658C1E70EAE427@irsmsx105.ger.corp.intel.com>
In-Reply-To: <6ee9d8ddb5de3b2de880ad42c37b012888a6facd.1563876069.git.anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH] vfio: use contiguous mapping for IOVA as VA mode
List-Id: DPDK patches and discussions

Subject: [PATCH] vfio: use contiguous mapping for IOVA as VA mode

When using IOVA as VA mode, there is no need to map segments page by page.
This normally isn't a problem, but it becomes one when attempting to use
DPDK in no-huge mode, where the VFIO subsystem simply runs out of space to
store mappings.

Fix this for x86 by triggering different callbacks based on whether IOVA as
VA mode is enabled.
Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov
---
 lib/librte_eal/linux/eal/eal_vfio.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index ed04231b1..501c74f23 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -1231,6 +1231,19 @@ rte_vfio_get_group_num(const char *sysfs_base,
 	return 1;
 }
 
+static int
+type1_map_contig(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
+		size_t len, void *arg)
+{
+	int *vfio_container_fd = arg;
+
+	if (msl->external)
+		return 0;
+
+	return vfio_type1_dma_mem_map(*vfio_container_fd, ms->addr_64, ms->iova,
+			len, 1);
+}
+
 static int
 type1_map(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 		void *arg)
@@ -1300,6 +1313,13 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 static int
 vfio_type1_dma_map(int vfio_container_fd)
 {
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		/* with IOVA as VA mode, we can get away with mapping contiguous
+		 * chunks rather than going page-by-page.
+		 */
+		return rte_memseg_contig_walk(type1_map_contig,
+				&vfio_container_fd);
+	}
 	return rte_memseg_walk(type1_map, &vfio_container_fd);
 }

Tested-by: Andrius Sirvys