From: "Burakov, Anatoly"
To: Shahaf Shuler, yskoh@mellanox.com, thomas@monjalon.net, ferruh.yigit@intel.com, nhorman@tuxdriver.com, gaetan.rivet@6wind.com
Cc: dev@dpdk.org
Date: Mon, 11 Mar 2019 10:19:50 +0000
Message-ID: <15427812-f9c6-359c-6894-db3a26e7f732@intel.com>
In-Reply-To: <1159b0da448d794e00011c7bbfd0a99523d005e6.1552206210.git.shahafs@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH v4 3/6] bus: introduce device level DMA memory mapping

On 10-Mar-19 8:28 AM, Shahaf Shuler wrote:
> The DPDK APIs expose 3 different modes of working with memory used for DMA:
>
> 1. Use the DPDK-owned memory (backed by the DPDK-provided hugepages).
> This memory is allocated by the DPDK libraries, included in the DPDK
> memory system (memseg lists) and automatically DMA mapped by the DPDK
> layers.
>
> 2. Use memory allocated by the user and registered with the DPDK memory
> system. Upon registration of the memory, the DPDK layers will DMA map it
> to all needed devices. After registration, allocation from this memory
> is done with the rte_*malloc APIs.
>
> 3. Use memory allocated by the user and not registered with the DPDK
> memory system. This is for users who want tight control over this
> memory (e.g. to avoid the rte_malloc header). The user should allocate
> the memory, register it through the rte_extmem_register API, and call
> the DMA map function in order to register such memory with the
> different devices.
>
> The scope of this patch is #3 above.
>
> Currently the only way to map external memory is through VFIO
> (rte_vfio_dma_map). While VFIO is common, there are other vendors
> which use different ways to map memory (e.g. Mellanox and NXP).
>
> This patch moves the DMA mapping to vendor-agnostic APIs. Device-level
> DMA map and unmap APIs were added; currently they are implemented only
> for PCI devices.
>
> For PCI bus devices, the PCI driver can expose its own map and unmap
> functions to be used for the mapping. If the driver doesn't provide
> any, the memory will be mapped, if possible, to the IOMMU through the
> VFIO APIs.
>
> Application usage of those APIs is quite simple:
> * allocate memory
> * call rte_extmem_register on the memory chunk.
> * take a device, and query its rte_device.
> * call the device-specific mapping function for this device.
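
For reference, a minimal sketch of that flow in application code, assuming
the rte_dev_dma_map()/rte_dev_dma_unmap() prototypes added by this series
(rte_dev_dma_map(dev, addr, iova, len)), the existing rte_extmem_register()/
rte_extmem_unregister() APIs, and IOVA-as-VA mode (e.g. with VFIO) so the VA
can double as the IOVA. The chunk size, page size and helper name are
illustrative, not part of the API:

#include <stdint.h>
#include <stdlib.h>

#include <rte_dev.h>
#include <rte_memory.h>

#define EXT_PAGE_SZ  4096UL       /* assume regular 4 KB pages here */
#define EXT_MEM_LEN  (2UL << 20)  /* 2 MB chunk, for illustration */

/*
 * Sketch only: map a user-allocated chunk for DMA with one device.
 * 'dev' is the rte_device queried from whatever device the application
 * uses, e.g. dev_info.device after rte_eth_dev_info_get() for an ethdev.
 */
static int
dma_map_external_chunk(struct rte_device *dev)
{
	void *addr;
	int ret;

	/* allocate memory outside of DPDK, aligned to the page size */
	addr = aligned_alloc(EXT_PAGE_SZ, EXT_MEM_LEN);
	if (addr == NULL)
		return -1;

	/* register the chunk with the DPDK memory subsystem */
	ret = rte_extmem_register(addr, EXT_MEM_LEN, NULL, 0, EXT_PAGE_SZ);
	if (ret != 0)
		goto out_free;

	/* DMA map the chunk for this particular device (VA used as IOVA) */
	ret = rte_dev_dma_map(dev, addr, (uintptr_t)addr, EXT_MEM_LEN);
	if (ret != 0)
		goto out_unregister;

	/* ... the chunk is now usable for DMA with this device ... */

	/* teardown in reverse order: unmap, unregister, free */
	rte_dev_dma_unmap(dev, addr, (uintptr_t)addr, EXT_MEM_LEN);
out_unregister:
	rte_extmem_unregister(addr, EXT_MEM_LEN);
out_free:
	free(addr);
	return ret;
}

The teardown mirrors the setup in reverse: the memory has to stay registered
for as long as any device still has it mapped.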
>
> Future work will deprecate the rte_vfio_dma_map and rte_vfio_dma_unmap
> APIs, leaving the rte device APIs as the preferred option for the user.
>
> Signed-off-by: Shahaf Shuler
> ---

Acked-by: Anatoly Burakov

--
Thanks,
Anatoly