Subject: Re: [dpdk-dev] [PATCH v4] kni: add IOVA va support for kni
From: "Burakov, Anatoly"
To: kirankumark@marvell.com, ferruh.yigit@intel.com
Cc: dev@dpdk.org
Date: Tue, 23 Apr 2019 09:56:02 +0100
In-Reply-To: <20190422043912.18060-1-kirankumark@marvell.com>
References: <20190416045556.9428-1-kirankumark@marvell.com>
 <20190422043912.18060-1-kirankumark@marvell.com>

On 22-Apr-19 5:39 AM, kirankumark@marvell.com wrote:
> From: Kiran Kumar K
>
> With current KNI implementation kernel module will work only in
> IOVA=PA mode. This patch will add support for kernel module to work
> with IOVA=VA mode.
>
> The idea is to get the physical address from iova address using
> api iommu_iova_to_phys. Using this API, we will get the physical
> address from iova address and later use phys_to_virt API to
> convert the physical address to kernel virtual address.
>
> With this approach we have compared the performance with IOVA=PA
> and there is no difference observed. Seems like kernel is the
> overhead.
>
> This approach will not work with the kernel versions less than 4.4.0
> because of API compatibility issues.
>
> Signed-off-by: Kiran Kumar K
> ---
> +/* iova to kernel virtual address */
> +static void *
> +iova2kva(struct kni_dev *kni, void *pa)
> +{
> +	return phys_to_virt(iommu_iova_to_phys(kni->domain,
> +				(dma_addr_t)pa));
> +}
> +
> +static void *
> +iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
> +{
> +	return phys_to_virt((iommu_iova_to_phys(kni->domain,
> +					(dma_addr_t)m->buf_physaddr) +
> +			     m->data_off));

Does this account for mbufs crossing a page boundary? In IOVA-as-VA mode,
the mempool is likely allocated in one go, so the mempool allocator will
not take care to prevent mbufs from crossing a page boundary. The data may
very well start at the very end of one page and continue at the beginning
of the next page, which will have a different physical address.
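
To illustrate, a rough, untested sketch of a page-boundary-safe translation
could look something like the following (the helper name and the copy-based
approach are purely hypothetical and not part of this patch; the only things
assumed from the patch are the kni->domain IOMMU domain and the standard
iommu_iova_to_phys()/phys_to_virt() kernel APIs). The point is that an IOVA
range has to be walked one page at a time, because each page can resolve to
a different physical address:

#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/io.h>

/* Hypothetical helper, not from the patch: copy 'len' bytes starting at
 * 'iova' into 'dst', translating the IOVA one page at a time so that data
 * crossing a page boundary still comes from the correct physical pages.
 */
static void
iova_range_copy_to_kernel(struct kni_dev *kni, dma_addr_t iova,
			  void *dst, size_t len)
{
	char *d = dst;

	while (len > 0) {
		/* Bytes remaining in the page 'iova' currently points into. */
		size_t chunk = min_t(size_t, len,
				     PAGE_SIZE - offset_in_page(iova));
		/* Each page may map to a different physical address. */
		phys_addr_t pa = iommu_iova_to_phys(kni->domain, iova);

		memcpy(d, phys_to_virt(pa), chunk);
		d += chunk;
		iova += chunk;
		len -= chunk;
	}
}

Whether an extra copy (or an equivalent per-page check on the fast path) is
acceptable for KNI performance is a separate question, but something along
these lines would at least stay correct when a buffer spans two pages.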
--
Thanks,
Anatoly