From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by dpdk.org (Postfix) with ESMTP id 0F29F1B535
 for ; Fri, 26 Apr 2019 11:11:03 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 26 Apr 2019 02:11:02 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.60,397,1549958400"; d="scan'208";a="153936201"
Received: from aburakov-mobl1.ger.corp.intel.com (HELO [10.251.92.20]) ([10.251.92.20])
 by orsmga002.jf.intel.com with ESMTP; 26 Apr 2019 02:11:01 -0700
To: kirankumark@marvell.com, ferruh.yigit@intel.com
Cc: dev@dpdk.org
References: <20190422043912.18060-1-kirankumark@marvell.com>
 <20190422061533.17538-1-kirankumark@marvell.com>
From: "Burakov, Anatoly"
Message-ID: <92076f68-d65c-da52-ecd2-782843bd3973@intel.com>
Date: Fri, 26 Apr 2019 10:11:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101
 Thunderbird/60.6.1
MIME-Version: 1.0
In-Reply-To: <20190422061533.17538-1-kirankumark@marvell.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH v5] kni: add IOVA va support for kni
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 26 Apr 2019 09:11:04 -0000

On 22-Apr-19 7:15 AM, kirankumark@marvell.com wrote:
> From: Kiran Kumar K
> 
> With current KNI implementation kernel module will work only in
> IOVA=PA mode. This patch will add support for kernel module to work
> with IOVA=VA mode.
> 
> The idea is to get the physical address from iova address using
> api iommu_iova_to_phys. Using this API, we will get the physical
> address from iova address and later use phys_to_virt API to
> convert the physical address to kernel virtual address.
> 
> With this approach we have compared the performance with IOVA=PA
> and there is no difference observed. Seems like kernel is the
> overhead.
> 
> This approach will not work with the kernel versions less than 4.4.0
> because of API compatibility issues.
> 
> Signed-off-by: Kiran Kumar K
> ---
> diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
> index be9e6b0b9..e77a28066 100644
> --- a/kernel/linux/kni/kni_net.c
> +++ b/kernel/linux/kni/kni_net.c
> @@ -35,6 +35,22 @@ static void kni_net_rx_normal(struct kni_dev *kni);
>   /* kni rx function pointer, with default to normal rx */
>   static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
> 
> +/* iova to kernel virtual address */
> +static void *
> +iova2kva(struct kni_dev *kni, void *pa)
> +{
> +	return phys_to_virt(iommu_iova_to_phys(kni->domain,
> +				(uintptr_t)pa));
> +}
> +
> +static void *
> +iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
> +{
> +	return phys_to_virt((iommu_iova_to_phys(kni->domain,
> +				(uintptr_t)m->buf_physaddr) +
> +			m->data_off));
> +}
> +

Apologies, i've accidentally responded to the previous version with this 
comment.

I don't see how this could possibly work, because for any 
IOVA-contiguous chunk of memory, mbufs are allowed to cross page 
boundaries. In this function, you're getting the start address of a 
buffer, but there are no guarantees that the end of the buffer is on 
the same physical page.
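To make the concern concrete, here is a minimal sketch (a hypothetical 
helper, not code from the patch or from KNI) of the kind of per-page 
check that would be needed before a single phys_to_virt() on the start 
address could be trusted for the whole buffer:

#include <linux/iommu.h>   /* iommu_iova_to_phys() */
#include <linux/io.h>      /* phys_to_virt() */
#include <linux/mm.h>      /* PAGE_SIZE, PAGE_SHIFT */

/*
 * Hypothetical helper, for illustration only.  Walk an IOVA range page
 * by page and verify that the backing physical pages are contiguous,
 * so that phys_to_virt() on the start address is actually valid for
 * the whole range.  Returns the kernel virtual address of the start on
 * success, NULL if the range crosses a physical discontinuity.
 */
static void *
iova_range_to_kva(struct iommu_domain *domain, dma_addr_t iova, size_t len)
{
	phys_addr_t first, prev, cur;
	size_t off;

	first = iommu_iova_to_phys(domain, iova);
	if (!first)
		return NULL;

	prev = first;
	/* step to the first page boundary past the start, then page by page */
	for (off = PAGE_SIZE - (iova & (PAGE_SIZE - 1)); off < len;
	     off += PAGE_SIZE) {
		cur = iommu_iova_to_phys(domain, iova + off);
		/* the next IOVA page must map to the next physical page */
		if (!cur || (cur >> PAGE_SHIFT) != (prev >> PAGE_SHIFT) + 1)
			return NULL;
		prev = cur;
	}

	return phys_to_virt(first);
}

Even with a check along these lines, a buffer whose backing pages turn 
out not to be physically contiguous would still have to be rejected or 
handled per page, which is the case the posted iova2kva()/ 
iova2data_kva() helpers do not cover.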
-- 
Thanks,
Anatoly