From: Thomas Monjalon
To: Andrew Rybchenko
Cc: Jerin Jacob, Ferruh Yigit, dpdk-dev, Elena Agostini,
 david.marchand@redhat.com
Date: Fri, 04 Jun 2021 15:18:56 +0200
Message-ID: <2020675.CS5hdstByM@thomas>
References: <20210602203531.2288645-1-thomas@monjalon.net>
 <1817476.i3Lo7XacKO@thomas>
Subject: Re: [dpdk-dev] [PATCH] gpudev: introduce memory API

04/06/2021 15:05, Andrew Rybchenko:
> On 6/4/21 3:46 PM, Thomas Monjalon wrote:
> > 04/06/2021 13:09, Jerin Jacob:
> >> On Fri, Jun 4, 2021 at 3:58 PM Thomas Monjalon wrote:
> >>> 03/06/2021 11:33, Ferruh Yigit:
> >>>> On 6/3/2021 8:47 AM, Jerin Jacob wrote:
> >>>>> On Thu, Jun 3, 2021 at 2:05 AM Thomas Monjalon wrote:
> >>>>>> + [gpudev] (@ref rte_gpudev.h),
> >>>>>
> >>>>> Since this device does not have a queue etc., shouldn't we make it
> >>>>> a library like mempool, with vendor-defined ops?
> >>>>
> >>>> +1
> >>>>
> >>>> The current RFC announces additional memory allocation capabilities,
> >>>> which may fit better as an extension to an existing memory-related
> >>>> library than as a new device abstraction library.
> >>>
> >>> It is not replacing mempool.
> >>> It is more at the same level as EAL memory management:
> >>> allocate a simple buffer, with the exception that it is done
> >>> on a specific device, so it requires a device ID.
> >>>
> >>> The other reason it needs to be a full library is that
> >>> it will start a workload on the GPU and get a completion notification,
> >>> so we can integrate the GPU workload in a packet processing pipeline.
> >>
> >> I might have confused you. My intention is not to make it fit under
> >> the mempool API.
> >>
> >> I agree that we need a separate library for this. My objection is only
> >> to the name: call it libgpu rather than libgpudev, and prefix the APIs
> >> with rte_gpu_ instead of rte_gpu_dev_, as it is not like the existing
> >> "device libraries" in DPDK; it is like the other "libraries" in DPDK.
> >
> > I think we should define a queue of processing actions,
> > so it looks like other device libraries.
> > And anyway I think a library managing a device class,
> > and having some device drivers, deserves the name of device library.
> >
> > I would like to read more opinions.
>
> Since the library is a unified interface to GPU device drivers,
> I think it should be named as in the patch - gpudev.
>
> Mempool looks like an exception here - initially it was a pure SW
> library, but now there are HW backends and corresponding device
> drivers.
>
> What I don't understand is where the GPU specifics are here.

That's an interesting question.
Let's first ask what a GPU is for DPDK.
I think it is like a sub-CPU with high parallel execution capabilities,
and it is controlled by the CPU.

> I.e. why GPU? A NIC can have its own memory and provide a corresponding API.

So far we don't need to explicitly allocate memory on the NIC.
The packets are received or copied to the CPU memory.
In the GPU case, the NIC could save the packets directly in the GPU memory,
hence the need to manage the GPU memory.

Also, because the GPU program is dynamically loaded,
there is no fixed API to interact with the GPU workload except via memory.

> What's the difference between "the memory on the CPU that is visible
> from the GPU" and existing memzones which are DMA mapped?

The only difference is that the GPU must map the CPU memory
in its program logic.
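
To make that last point concrete, here is a minimal, self-contained
sketch of "CPU memory visible from the GPU", assuming a CUDA backend
(cudaHostRegister / cudaHostGetDevicePointer); other GPU stacks have
equivalent calls. It only illustrates the extra mapping step, it is
not the API of this patch:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void)
{
	size_t len = 4096;
	void *cpu_buf = malloc(len);	/* stand-in for a DPDK memzone */
	void *gpu_view = NULL;

	if (cpu_buf == NULL)
		return 1;

	/* Pin the CPU buffer and map it into the GPU address space;
	 * this mapping is the step that a DMA-mapped memzone alone does
	 * not give you (needs a device supporting mapped pinned memory). */
	if (cudaHostRegister(cpu_buf, len, cudaHostRegisterMapped) != cudaSuccess)
		return 1;
	if (cudaHostGetDevicePointer(&gpu_view, cpu_buf, 0) != cudaSuccess)
		return 1;

	/* gpu_view can now be passed to a GPU kernel, which reads and
	 * writes the same bytes the CPU sees through cpu_buf. */
	printf("CPU buffer %p is visible to the GPU as %p\n", cpu_buf, gpu_view);

	cudaHostUnregister(cpu_buf);
	free(cpu_buf);
	return 0;
}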
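
Similarly, for the allocation model mentioned above (EAL-like buffer
allocation, but on a specific device, hence the device ID), here is a
hedged sketch of what such a call could look like. The names
rte_gpu_malloc and rte_gpu_free are illustrative assumptions, not the
actual API of the patch, and the malloc-backed stubs only stand in for
a real GPU driver so that the example runs:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical API: like rte_malloc(), plus a device ID selecting
 * which GPU's memory the buffer lives in. */
static void *rte_gpu_malloc(uint16_t dev_id, size_t size)
{
	(void)dev_id;	/* a real driver would allocate on GPU dev_id */
	return malloc(size);
}

static void rte_gpu_free(uint16_t dev_id, void *ptr)
{
	(void)dev_id;
	free(ptr);
}

int main(void)
{
	uint16_t gpu_id = 0;	/* hypothetical GPU device ID */
	void *pkt_buf = rte_gpu_malloc(gpu_id, 2048);

	if (pkt_buf == NULL)
		return 1;

	/* The NIC could be configured to DMA received packets straight
	 * into pkt_buf, which is why the GPU memory needs explicit
	 * management from the CPU side. */
	printf("allocated 2048 bytes on GPU %u at %p\n",
	       (unsigned int)gpu_id, pkt_buf);

	rte_gpu_free(gpu_id, pkt_buf);
	return 0;
}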