From: Thomas Monjalon
To: Jerin Jacob
Cc: dev@dpdk.org, Honnappa Nagarahalli, Andrew Rybchenko,
    "Yigit, Ferruh", dpdk-dev, Elena Agostini, David Marchand, nd,
    "Wang, Haiyue"
Date: Mon, 07 Jun 2021 18:47:24 +0200
Message-ID: <2428387.JO1QuEOxcK@thomas>
References: <20210602203531.2288645-1-thomas@monjalon.net>
    <3716354.mlbyQRhZUS@thomas>
Subject: Re: [dpdk-dev] [PATCH] gpudev: introduce memory API

07/06/2021 15:54, Jerin Jacob:
> On Mon, Jun 7, 2021 at 4:13 PM Thomas Monjalon wrote:
> > 07/06/2021 09:20, Wang, Haiyue:
> > > From: Honnappa Nagarahalli
> > > > If we keep CXL in mind, I would imagine that in the future the
> > > > devices on PCIe could have their own local memory. Maybe some of
> > > > the APIs could use generic names. For example, instead of calling
> > > > it "rte_gpu_malloc", we could call it "rte_dev_malloc". This way
> > > > any future device which hosts its own memory that needs to be
> > > > managed by the application can use these APIs.
> > >
> > > "rte_dev_malloc" sounds like a good name.
> >
> > Yes I like the idea.
> > 2 concerns:
> >
> > 1/ Device memory allocation requires a device handle.
> > So far we avoided exposing rte_device to the application.
> > How should we get a device handle from a DPDK application?
>
> Each device behaves differently at this level. In the view of a
> generic application, the architecture should look like:
>
> < Use DPDK subsystem as rte_ethdev, rte_bbdev etc for SPECIFIC function >
>                 ^
>                 |
>           < DPDK driver >
>                 ^
>                 |

I think the formatting went wrong above.
I would add more to the block diagram:

  class device API     -   computing device API
          |                         |
  class device driver  -   computing device driver
          |                         |
       EAL device with memory callback

The idea above is that the class device driver can use the services of
the new computing device library. One basic API service is to provide
a device ID for the memory callback. Other services are for execution
control.

> An implementation may decide to have "in tree" or "out of tree"
> drivers or rte_device implementations.
> But generic DPDK applications should not use devices directly,
> i.e. rte_device needs to have this callback, and the mlx
> ethdev/crypto drivers use it to implement the public API.
> Otherwise, it is the same as rawdev in DPDK.
> So I am not sure what it brings beyond rawdev if we do not take
> the above architecture.
>
> > 2/ Implementation must be done in a driver.
> > Should it be a callback defined at rte_device level?
>
> IMO, yes, and DPDK subsystem drivers should use it.

I'm not sure subsystems should bypass the API for device memory.
We could do some generic work in the API function and call the driver
callback only for the device-specific stuff (see the sketch at the end
of this mail). In that case, both the callback and the API would live
in the computing device library. On the other hand, having the callback
and the API in EAL would allow a common function for memory allocation
in EAL.

Another thought: I would like to unify memory allocation in DPDK
behind a unique function with a single set of flags (also sketched
below). A flag could be used to target devices instead of the running
CPU, and the same parameter could be shared for the device ID or NUMA
node.
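
To make the callback idea concrete, here is a rough, self-contained
sketch of a memory callback at the rte_device level with a generic
wrapper in the computing device library. All names below
(dev_mem_alloc_t, sketch_device, comp_dev_malloc) are hypothetical
stand-ins, not existing DPDK API:

#include <stddef.h>

/* Driver-supplied callback: serve an allocation from the device's
 * own memory. */
typedef void *(*dev_mem_alloc_t)(void *dev_priv, size_t size,
                                 unsigned int align);

/* Stand-in for the EAL rte_device, extended with the new callback. */
struct sketch_device {
        void *priv;                /* driver private data */
        dev_mem_alloc_t mem_alloc; /* hypothetical memory callback */
};

/* Library-level API: do the generic work here and call the driver
 * callback only for the device-specific part. */
static void *
comp_dev_malloc(struct sketch_device *dev, size_t size,
                unsigned int align)
{
        if (dev == NULL || dev->mem_alloc == NULL)
                return NULL; /* no driver support for device memory */
        if (size == 0 || (align & (align - 1)) != 0)
                return NULL; /* generic argument validation */
        /* generic bookkeeping/tracing would also go here */
        return dev->mem_alloc(dev->priv, size, align);
}

The split keeps argument validation and bookkeeping generic, while
only the final call is driver-specific.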
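
And for the unified allocation idea, an equally rough sketch of how
one flag could switch the meaning of a shared "id" parameter between
NUMA node and device ID. The flag and the function are invented for
illustration; only aligned_alloc() (C11) and the comparison with
rte_malloc_socket() are real:

#include <stdlib.h>
#include <stddef.h>

#define MALLOC_F_ON_DEVICE (1u << 0) /* hypothetical flag */

static void *
unified_malloc(size_t size, size_t align, int id, unsigned int flags)
{
        (void)id; /* this stub does not dispatch on id */
        if (flags & MALLOC_F_ON_DEVICE)
                /* "id" would be a device ID: dispatch to the device
                 * path, e.g. comp_dev_malloc() in the sketch above. */
                return NULL; /* no device backend in this sketch */
        /* "id" would be a NUMA node: a real implementation would
         * allocate hugepage memory on that socket, as
         * rte_malloc_socket() does today. Plain C11 aligned_alloc()
         * stands in here (it requires size to be a multiple of
         * align). */
        return aligned_alloc(align, size);
}

One function with one flag set would avoid a parallel rte_dev_malloc()
API while still covering the CXL-style devices Honnappa mentioned.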