From: Thomas Monjalon
To: Jerin Jacob, techboard@dpdk.org
Cc: Elena Agostini, dpdk-dev
Date: Mon, 11 Oct 2021 14:44:21 +0200
Message-ID: <8296426.PREcJboN3U@thomas>
References: <20210602203531.2288645-1-thomas@monjalon.net> <68083401.ybZ649KAnY@thomas>
Subject: Re: [dpdk-dev] [PATCH v3 0/9] GPU library
List-Id: DPDK patches and discussions

11/10/2021 13:41, Jerin Jacob:
> On Mon, Oct 11, 2021 at 3:57 PM Thomas Monjalon wrote:
> > 11/10/2021 11:29, Jerin Jacob:
> > > On Mon, Oct 11, 2021 at 2:42 PM Thomas Monjalon wrote:
> > > > 11/10/2021 10:43, Jerin Jacob:
> > > > > On Mon, Oct 11, 2021 at 1:48 PM Thomas Monjalon wrote:
> > > > > > 10/10/2021 12:16, Jerin Jacob:
> > > > > > > On Fri, Oct 8, 2021 at 11:13 PM wrote:
> > > > > > > >
> > > > > > > > From: eagostini
> > > > > > > >
> > > > > > > > In a heterogeneous computing system, processing is not only in the CPU.
> > > > > > > > Some tasks can be delegated to devices working in parallel.
> > > > > > > >
> > > > > > > > The goal of this new library is to enhance the collaboration between
> > > > > > > > DPDK, which is primarily a CPU framework, and GPU devices.
> > > > > > > >
> > > > > > > > When mixing network activity with task processing on a non-CPU device,
> > > > > > > > the CPU may need to communicate with the device
> > > > > > > > in order to manage memory, synchronize operations, exchange info, etc.
> > > > > > > >
> > > > > > > > This library provides a number of new features:
> > > > > > > > - Interoperability with GPU-specific libraries through generic handlers
> > > > > > > > - Possibility to allocate and free memory on the GPU
> > > > > > > > - Possibility to allocate and free memory on the CPU but visible from the GPU
> > > > > > > > - Communication functions to enhance the dialog between the CPU and the GPU
> > > > > > >
> > > > > > > In the RFC thread, there was one outstanding non-technical issue on this,
> > > > > > > i.e.
> > > > > > > the above features are driver-specific details. Does the DPDK
> > > > > > > _application_ need to be aware of this?
> > > > > >
> > > > > > I don't see these features as driver-specific.
> > > > >
> > > > > That is the disconnect. I see these as driver-specific details
> > > > > which are not required to implement an "application"-facing API.
> > > >
> > > > Indeed this is the disconnect.
> > > > I already answered, but it seems you don't accept the answer.
> > >
> > > Same with you. That is why I requested that we get opinions from others.
> > > Some of them already provided opinions in the RFC.
> >
> > This is why I Cc'ed the techboard.
>
> Yes. Indeed.
>
> > > > First, this is not driver-specific. It is a low-level API.
> > >
> > > What is the difference between a low-level API and a driver-level API?
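[Editor's note: the feature list quoted above can be mocked in plain C to show the shape of such a low-level API. All names here (`gpu_dev`, `gpu_malloc`, `gpu_malloc_cpu_visible`) are illustrative assumptions, not the actual gpudev symbols; a real driver would wrap a CUDA/ROCm context behind these generic handlers instead of host `malloc`.]

```c
#include <stdlib.h>

/* Illustrative stand-in for a generic GPU device handle. */
typedef struct gpu_dev {
    const char *name;
    size_t allocated;   /* bytes currently allocated on the device */
} gpu_dev;

/* Feature: allocate memory on the GPU (mocked with host malloc). */
static void *gpu_malloc(gpu_dev *dev, size_t size) {
    dev->allocated += size;
    return malloc(size);
}

/* Feature: allocate CPU memory visible from the GPU (mocked; a real
 * driver would pin/register the pages with the GPU here). */
static void *gpu_malloc_cpu_visible(gpu_dev *dev, size_t size) {
    (void)dev;
    return malloc(size);
}

/* Feature: free device memory and update the accounting. */
static void gpu_free(gpu_dev *dev, void *ptr, size_t size) {
    dev->allocated -= size;
    free(ptr);
}
```

The point of the sketch is that nothing above names a vendor or a bus: the same handlers could be backed by any device driver.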
> > The low-level API provides tools to build a feature,
> > but no specific feature.
>
> > > > > For example, if we need to implement application-facing subsystems like bbdev,
> > > > > and we make all this a driver interface, you can still implement the
> > > > > bbdev API as a driver without
> > > > > exposing HW-specific details like how devices communicate with the CPU, how
> > > > > memory is allocated, etc.
> > > > > to the "application".
> > > >
> > > > There are 2 things to understand here.
> > > >
> > > > First, we want to allow the application to use the GPU for needs which are
> > > > not exposed by any other DPDK API.
> > > >
> > > > Second, if we want to implement another DPDK API like bbdev,
> > > > then the GPU implementation would be exposed as a vdev in bbdev,
> > > > with the HW GPU device being a PCI device in gpudev.
> > > > They are two different levels, got it?
> > >
> > > Exactly. So what is the point of exposing a low-level driver API to the
> > > "application"?
> > > Why is it not part of the internal driver API? My point is, why does the
> > > application need to worry
> > > about how the CPU <-> device communication works, CPU <-> device memory
> > > visibility, etc.?
> >
> > There are two reasons.
> >
> > 1/ The application may want to use the GPU for some application-specific
> > needs which are not abstracted in the DPDK API.
>
> Yes. Exactly, that is where my concern is. If we take this path, what is
> the motivation to contribute to DPDK abstracted subsystem APIs which
> make sense for multiple vendors?
> Similar stuff is applicable for DPUs.

A feature-specific API is better of course;
there is no loss of motivation.
But you cannot forbid applications from having their own features on GPU.

> Another way to put it: if a GPU is doing some ethdev offload, why not make it
> an ethdev offload in the ethdev spec, so that
> another type of device can be used, which makes sense for application writers?

If we do ethdev offload, yes we'll implement it.
And we'll do it on top of gpudev, which is the only way to share the GPU.

> For example, in the future, if someone needs to add an ML (machine
> learning) subsystem, enabling a proper subsystem
> interface is good for DPDK. If this path is open, there is no
> motivation for contribution, and the application
> will not have a standard interface doing the ML job across multiple vendors.

Wrong. It does not remove the motivation; it is a first step to build on top of it.

> That is the only reason why I am saying it should not be an APPLICATION
> interface; it can be a DRIVER interface.
>
> > 2/ This API may also be used by some feature implementations internally
> > in some DPDK libs or drivers.
> > We cannot skip the gpudev layer because this is what allows generic probing
> > of the HW, and gpudev allows sharing the GPU between multiple features
> > implemented in different libs or drivers, thanks to the "child" concept.
>
> Again, why do applications need to know it? It is similar to a `bus`
> kind of thing, where it shares the physical resources.

No, it's not a bus; it is a device that we need to share.

> > > > > > > aka a DPDK device class has a fixed personality, and it has an API
> > > > > > > to deal with abstracting application-specific
> > > > > > > end-user functionality like ethdev, cryptodev, eventdev, irrespective
> > > > > > > of underlying bus/device properties.
> > > > > >
> > > > > > The goal of the lib is to allow anyone to invent any feature
> > > > > > which is not already available in DPDK.
> > > > > >
> > > > > > > Even similar semantics are required for DPU (SmartNIC)
> > > > > > > communication. I am planning to
> > > > > > > send an RFC in the coming days to address the issue without the application
> > > > > > > knowing the bus/HW/driver details.
> > > > > >
> > > > > > gpudev is not exposing bus/hw/driver details.
> > > > > > I don't understand what you mean.
> > > > >
> > > > > See above. We are going into circles.
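[Editor's note: the "child" concept mentioned above can be illustrated with a minimal mock: one physical parent device is split into per-feature child handles, so independent libraries can each own a context on the same GPU. The names, fields, and id scheme below are assumptions for illustration, not the gpudev implementation.]

```c
#include <stddef.h>

#define MAX_CHILDREN 8

/* Mock device: one physical parent GPU, several logical children. */
struct mock_gpu {
    int parent_id;
    int n_children;
    int child_ids[MAX_CHILDREN];
};

/* Create a child handle of the physical device; returns the child id,
 * or -1 when the device cannot host more children. The id is derived
 * from the parent so the sharing relationship stays visible. */
static int mock_gpu_add_child(struct mock_gpu *g) {
    if (g->n_children >= MAX_CHILDREN)
        return -1;
    int id = g->parent_id * 100 + g->n_children;
    g->child_ids[g->n_children++] = id;
    return id;
}
```

In this sketch, a bbdev-style driver and an application-specific kernel would each hold their own child id while the parent keeps track of who shares the hardware.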
In short, Jerin wants to forbid the generic use of GPUs in DPDK.
He wants only feature-specific APIs.
It is like restricting the functions we can run on a CPU.
And anyway, we need this layer to share the GPU between multiple features.

Techboard, please vote.