From: Thomas Monjalon
To: Jerin Jacob
Cc: Honnappa Nagarahalli, Andrew Rybchenko, "Yigit, Ferruh", dpdk-dev, Elena Agostini, David Marchand, nd, "Wang, Haiyue"
Date: Tue, 08 Jun 2021 09:32:43 +0200
Message-ID: <9721638.xpiActGcI8@thomas>
References: <20210602203531.2288645-1-thomas@monjalon.net> <2152098.qji4Z79139@thomas>
Subject: Re: [dpdk-dev] [PATCH] gpudev: introduce memory API
List-Id: DPDK patches and discussions

08/06/2021 09:09, Jerin Jacob:
> On Tue, Jun 8, 2021 at 12:05 PM Thomas Monjalon wrote:
> >
> > 08/06/2021 06:10, Jerin Jacob:
> > > On Mon, Jun 7, 2021 at 10:17 PM Thomas Monjalon wrote:
> > > >
> > > > 07/06/2021 15:54, Jerin Jacob:
> > > > > On Mon, Jun 7, 2021 at 4:13 PM Thomas Monjalon wrote:
> > > > > > 07/06/2021 09:20, Wang, Haiyue:
> > > > > > > From: Honnappa Nagarahalli
> > > > > > > > If we keep CXL in mind, I would imagine that in the future the devices on
> > > > > > > > PCIe could have their own local memory. Maybe some of the APIs could
> > > > > > > > use generic names. For example, instead of calling it "rte_gpu_malloc",
> > > > > > > > maybe we could call it "rte_dev_malloc". This way any future device which
> > > > > > > > hosts its own memory that needs to be managed by the application can use
> > > > > > > > these APIs.
> > > > > > >
> > > > > > > "rte_dev_malloc" sounds a good name,
> > > > > >
> > > > > > Yes, I like the idea.
> > > > > > 2 concerns:
> > > > > >
> > > > > > 1/ Device memory allocation requires a device handle.
> > > > > > So far we avoided exposing rte_device to the application.
> > > > > > How should we get a device handle from a DPDK application?
> > > > >
> > > > > Each device behaves differently at this level. In the view of a
> > > > > generic application, the architecture should look like:
> > > > >
> > > > > < Use DPDK subsystems such as rte_ethdev, rte_bbdev etc. for a SPECIFIC function >
> > > > >                  ^
> > > > >                  |
> > > > >           < DPDK driver >
> > > > >                  ^
> > > > >                  |
> > > >
> > > > I think the formatting went wrong above.
> > > >
> > > > I would add more to the block diagram:
> > > >
> > > > class device API    -   computing device API
> > > >        |                  |            |
> > > > class device driver -   computing device driver
> > > >        |                       |
> > > >      EAL device with memory callback
> > > >
> > > > The idea above is that the class device driver can use services
> > > > of the new computing device library.
> > >
> > > Yes. The question is: do we need any public DPDK _application_ APIs for that?
> >
> > To have something generic!
> >
> > > If it is a public API then the scope is much bigger than that, as the
> > > application can use it directly, and that makes it non-portable.
> >
> > That makes no sense. If we make an API, it will be more portable.
>
> The portable application will be using the class device API.
> For example, when does an application need to call rte_gpu_malloc() vs rte_malloc()?
> Is it better that the driver-specific functions used in the "class
> device driver" are not exposed?
>
> > The only part which is non-portable is the program on the device,
> > which may be different per computing device.
> > The synchronization with the DPDK application should be portable
> > if we define some good API.
> >
> > > If the scope is only class-driver consumption, then the existing
> > > "bus" _kind of_ abstraction/API makes sense to me.
> > >
> > > Where it abstracts:
> > > - FW download to the device
> > > - Memory management of the device
> > > - An opaque way to enqueue/dequeue jobs to the device
> > >
> > > And the above should be consumed by the "class driver", not the "application".
> > >
> > > If the application is doing that, we are in rte_rawdev territory.
> >
> > I'm sorry, I don't understand why you make such an assertion.
> > It seems you don't want a generic API (which is the purpose of DPDK).
>
> I would like to have a generic _application_ API if the application
> _needs_ to use it.
>
> The v1 is nowhere close to any compute device description.

As I said, I forgot the RFC tag.
I just wanted to start the discussion, and it was fruitful, no regret.

> It has a memory allocation API. That is a device attribute, not
> strictly tied ONLY to a computing device.
>
> So, at least, I am asking to have a concrete proposal on the "compute
> device" schematic, rather than starting with a memory API and
> rubber-stamping whatever a new device adds in the future.
>
> When we added all the class devices to DPDK, everyone had a complete view
> of their function (the RFC of each subsystem had enough API to express
> the "basic" usage) and purpose from the _application_ PoV. I see that is
> missing here.

I keep explaining in emails while preparing a v2.
Now that we are going in circles, let's wait for the v2, which will address a lot of comments.