From mboxrd@z Thu Jan  1 00:00:00 1970
From: Thomas Monjalon
To: Jerin Jacob
Cc: Elena Agostini, dpdk-dev, techboard@dpdk.org
Date: Mon, 11 Oct 2021 12:27:52 +0200
Message-ID: <68083401.ybZ649KAnY@thomas>
References: <20210602203531.2288645-1-thomas@monjalon.net> <18783192.D4B0UDpyQ6@thomas>
Subject: Re: [dpdk-dev] [PATCH v3 0/9] GPU library
List-Id: DPDK patches and discussions

11/10/2021 11:29, Jerin Jacob:
> On Mon, Oct 11, 2021 at 2:42 PM Thomas Monjalon wrote:
> > 11/10/2021 10:43, Jerin Jacob:
> > > On Mon, Oct 11, 2021 at 1:48 PM Thomas Monjalon wrote:
> > > > 10/10/2021 12:16, Jerin Jacob:
> > > > > On Fri, Oct 8, 2021 at 11:13 PM wrote:
> > > > > >
> > > > > > From: eagostini
> > > > > >
> > > > > > In a heterogeneous computing system, processing is not only in the CPU.
> > > > > > Some tasks can be delegated to devices working in parallel.
> > > > > >
> > > > > > The goal of this new library is to enhance the collaboration between
> > > > > > DPDK, which is primarily a CPU framework, and GPU devices.
> > > > > >
> > > > > > When mixing network activity with task processing on a non-CPU device,
> > > > > > there may be a need for the CPU to communicate with the device
> > > > > > in order to manage memory, synchronize operations, exchange info, etc.
> > > > > >
> > > > > > This library provides a number of new features:
> > > > > > - Interoperability with GPU-specific libraries through generic handlers
> > > > > > - Possibility to allocate and free memory on the GPU
> > > > > > - Possibility to allocate and free memory on the CPU but visible from the GPU
> > > > > > - Communication functions to enhance the dialog between the CPU and the GPU
> > > > >
> > > > > In the RFC thread, there was one outstanding non-technical issue on this,
> > > > > i.e. the above features are driver-specific details. Does the DPDK
> > > > > _application_ need to be aware of this?
> > > >
> > > > I don't see these features as driver-specific.
> > >
> > > That is the disconnect. I see this as more driver-specific details
> > > which are not required to implement an "application"-facing API.
> >
> > Indeed this is the disconnect.
> > I already answered but it seems you don't accept the answer.
>
> Same with you. That is why I requested that we get opinions from others.
> Some of them already provided opinions in the RFC.

This is why I Cc'ed techboard.

> > First, this is not driver-specific. It is a low-level API.
>
> What is the difference between a low-level API and a driver-level API?

The low-level API provides tools to build a feature, but no specific feature.

> > > For example, if we need to implement "application-facing" subsystems like bbdev,
> > > and we make all this a driver interface, you can still implement the
> > > bbdev API as a driver without
> > > exposing HW-specific details like how devices communicate with the CPU, how
> > > memory is allocated, etc. to the "application".
> >
> > There are 2 things to understand here.
> >
> > First, we want to allow the application to use the GPU for needs which are
> > not exposed by any other DPDK API.
> >
> > Second, if we want to implement another DPDK API like bbdev,
> > then the GPU implementation would be exposed as a vdev in bbdev,
> > using the HW GPU device being a PCI device in gpudev.
> > They are two different levels, got it?
>
> Exactly. So what is the point of exposing a low-level driver API to the
> "application"? Why is it not part of the internal driver API?
> My point is, why does the application need to worry
> about how the CPU <-> device communication works, CPU <-> device memory
> visibility, etc.?

There are two reasons.

1/ The application may want to use the GPU for some application-specific
needs which are not abstracted in a DPDK API.

2/ This API may also be used by some feature implementation internally
in some DPDK libs or drivers.

We cannot skip the gpudev layer because this is what allows generic probing
of the HW, and gpudev allows sharing the GPU with multiple features
implemented in different libs or drivers, thanks to the "child" concept.
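As an illustration of the first reason, here is a rough sketch of what direct
use of such a low-level API from an application could look like. The rte_gpu_*
names and signatures below are only indicative assumptions following the prefix
used in this series, not the exact API of the patches:

    /* Illustrative sketch only: names and signatures are assumptions
     * based on the rte_gpu_ prefix of this series, not the final API. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_gpudev.h>

    int main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    return -1;

            int16_t dev_id = 0; /* first GPU probed by the gpudev layer */

            /* Memory allocated on the GPU, for use by GPU tasks. */
            void *gpu_buf = rte_gpu_mem_alloc(dev_id, 4096);

            /* CPU memory made visible to the GPU, e.g. for packet metadata. */
            void *cpu_buf = malloc(4096);
            rte_gpu_mem_register(dev_id, 4096, cpu_buf);

            /* ... application-specific processing, e.g. triggering a GPU
             * task on gpu_buf through the vendor's own runtime ... */

            rte_gpu_mem_unregister(dev_id, cpu_buf);
            rte_gpu_mem_free(dev_id, gpu_buf);
            free(cpu_buf);

            return rte_eal_cleanup();
    }

A bbdev-like class could be built on top of exactly these calls inside a
driver, but the point of gpudev is that an application can also reach them
directly when no such class exists for its use case.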
> > > > > aka a DPDK device class has a fixed personality, and it has an API to
> > > > > deal with abstracting specific application-specific
> > > > > end-user functionality like ethdev, cryptodev, eventdev, irrespective
> > > > > of underlying bus/device properties.
> > > >
> > > > The goal of the lib is to allow anyone to invent any feature
> > > > which is not already available in DPDK.
> > > >
> > > > > Even similar semantics are required for DPU (Smart NIC)
> > > > > communication. I am planning to
> > > > > send an RFC in the coming days to address the issue without the application
> > > > > knowing the bus/HW/driver details.
> > > >
> > > > gpudev is not exposing bus/HW/driver details.
> > > > I don't understand what you mean.
> > >
> > > See above.