From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: Shahaf Shuler
Cc: dev@dpdk.org, anatoly.burakov@intel.com, yskoh@mellanox.com,
 ferruh.yigit@intel.com, nhorman@tuxdriver.com, gaetan.rivet@6wind.com
Date: Sat, 30 Mar 2019 15:40:34 +0100
Message-ID: <29901841.I4p9NahPEI@xps>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH v4 0/6] introduce DMA memory mapping for external memory

10/03/2019 09:27, Shahaf Shuler:
> The DPDK APIs expose 3 different modes to work with memory used for DMA:
>
> 1. Use the DPDK owned memory (backed by the DPDK provided hugepages).
>    This memory is allocated by the DPDK libraries, included in the DPDK
>    memory system (memseg lists) and automatically DMA mapped by the DPDK
>    layers.
>
> 2. Use memory allocated by the user and registered to the DPDK memory
>    system. Upon registration of the memory, the DPDK layers will DMA map
>    it to all needed devices. After registration, allocation from this
>    memory is done with the rte_*malloc APIs.
>
> 3. Use memory allocated by the user and not registered to the DPDK memory
>    system. This is for users who want tight control over this memory
>    (e.g. to avoid the rte_malloc header). The user should create the
>    memory, register it through the rte_extmem_register API, and call the
>    DMA map function in order to register such memory with the different
>    devices.
>
> The scope of this patch series focuses on #3 above.
>
> Currently the only way to map external memory is through VFIO
> (rte_vfio_dma_map). While VFIO is common, there are other vendors
> which use different ways to map memory (e.g. Mellanox and NXP).
>
> The work in this series moves the DMA mapping to vendor-agnostic APIs.
> Device-level DMA map and unmap APIs were added. Those APIs are currently
> implemented only for PCI devices.
>
> For PCI bus devices, the PCI driver can expose its own map and unmap
> functions to be used for the mapping. In case the driver doesn't provide
> any, the memory will be mapped, if possible, to the IOMMU through VFIO
> APIs.
>
> Application usage of those APIs is quite simple:
> * allocate memory
> * call rte_extmem_register on the memory chunk
> * take a device, and query its rte_device
> * call the device-specific mapping function for this device
>
> Future work will deprecate the rte_vfio_dma_map and rte_vfio_dma_unmap
> APIs, leaving the rte device APIs as the preferred option for the user.
>
> Shahaf Shuler (6):
>   vfio: allow DMA map of memory for the default vfio fd
>   vfio: don't fail to DMA map if memory is already mapped
>   bus: introduce device level DMA memory mapping
>   net/mlx5: refactor external memory registration
>   net/mlx5: support PCI device DMA map and unmap
>   doc: deprecation notice for VFIO DMA map APIs

Applied, thanks
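
For reference, the four usage steps listed in the cover letter might look roughly like the sketch below. This is a minimal illustration, not part of the applied series: the port id, buffer length, page size, and the IOVA-as-VA choice are assumptions for the example, error handling is trimmed, and the code requires DPDK headers and an initialized EAL to build.

```c
#include <sys/mman.h>
#include <rte_ethdev.h>
#include <rte_memory.h>
#include <rte_dev.h>

#define EXTMEM_LEN (2 * 1024 * 1024)   /* illustrative size */
#define EXTMEM_PGSZ 4096               /* assumes 4K system pages */

static int
map_external_memory(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	void *addr;

	/* Step 1: allocate memory outside of DPDK. */
	addr = mmap(NULL, EXTMEM_LEN, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return -1;

	/* Step 2: register the chunk with the DPDK memory system.
	 * NULL iova_addrs is assumed acceptable here when IOVAs are
	 * not known in advance. */
	if (rte_extmem_register(addr, EXTMEM_LEN, NULL, 0, EXTMEM_PGSZ) != 0)
		return -1;

	/* Step 3: take a device and query its rte_device. */
	rte_eth_dev_info_get(port_id, &dev_info);

	/* Step 4: call the device-level DMA map function introduced by
	 * this series. Using the VA as IOVA assumes IOVA-as-VA mode. */
	return rte_dev_dma_map(dev_info.device, addr,
			       (uint64_t)(uintptr_t)addr, EXTMEM_LEN);
}
```

If the PCI driver behind the port exposes its own map callback it will be used; otherwise, as the cover letter notes, the mapping falls back to VFIO/IOMMU when possible.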