From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: fengchengwen
Cc: ferruh.yigit@intel.com, bruce.richardson@intel.com, jerinj@marvell.com,
 jerinjacobk@gmail.com, andrew.rybchenko@oktetlabs.ru, dev@dpdk.org,
 mb@smartsharesystems.com, nipun.gupta@nxp.com, hemant.agrawal@nxp.com,
 maxime.coquelin@redhat.com, honnappa.nagarahalli@arm.com,
 david.marchand@redhat.com, sburla@marvell.com, pkapoor@marvell.com,
 konstantin.ananyev@intel.com, conor.walsh@intel.com, kevin.laatz@intel.com
Date: Fri, 08 Oct 2021 12:09:51 +0200
Message-ID: <3470783.hDBqWYTSAp@thomas>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <11670203.VNtdCpnXh1@thomas>
Subject: Re: [dpdk-dev] [PATCH v23 1/6] dmadev: introduce DMA device library

08/10/2021 09:13, fengchengwen:
> On 2021/10/6 18:26, Thomas Monjalon wrote:
> > 24/09/2021 12:53, Chengwen Feng:
> >> +++ b/lib/dmadev/rte_dmadev.h
> >> + * The dmadev are dynamically allocated by rte_dma_pmd_allocate() during the
> >> + * PCI/SoC device probing phase performed at EAL initialization time. And could
> >> + * be released by rte_dma_pmd_release() during the PCI/SoC device removing
> >> + * phase.
> >
> > I don't think this text has value,
> > and we could imagine allocating a device at a later stage.
>
> Yes, we could remove the stage description because it is well-known, but I
> recommend keeping the rte_dma_pmd_allocate and rte_dma_pmd_release functions.
> How about:
>
> * The dmadev are dynamically allocated by rte_dma_pmd_allocate(). And could
> * be released by rte_dma_pmd_release().

These functions are for PMDs.
This file is for applications, so it is not appropriate.

> > [...]
> >> + * Configure the maximum number of dmadevs.
> >> + * @note This function can be invoked before the primary process rte_eal_init()
> >> + * to change the maximum number of dmadevs.
> >
> > You should mention what the default is.
> > Is the default exported to the app in this file?
>
> The default macro is RTE_DMADEV_DEFAULT_MAX_DEVS, and I placed it in rte_config.h.

No, we avoid adding things in rte_config.h.
There should be a static default which can be changed at runtime only.

> I think it's better to focus on one place (rte_config.h) than to modify the config
> in multiple places (e.g. rte_dmadev.h/rte_xxx.h).

The config is modified in only one place: the function.

> >> + *
> >> + * @param dev_max
> >> + *   maximum number of dmadevs.
> >> + *
> >> + * @return
> >> + *   0 on success. Otherwise negative value is returned.
> >> + */
> >> +__rte_experimental
> >> +int rte_dma_dev_max(size_t dev_max);
> >
> > What about a function able to do more, with the name rte_dma_init?
> > It should allocate the inter-process shared memory,
> > and do the lookup in case of secondary process.
>
> Yes, we defined dma_data_prepare() which does the above; it's in the 4th patch.
>
> Because we cannot do things like allocating inter-process shared memory before
> rte_eal_init, I think it's better to keep rte_dma_dev_max as it is.

Good point.

> >> +++ b/lib/dmadev/rte_dmadev_core.h
> >> +/**
> >> + * @file
> >> + *
> >> + * DMA Device internal header.
> >> + *
> >> + * This header contains internal data types, that are used by the DMA devices
> >> + * in order to expose their ops to the class.
> >> + *
> >> + * Applications should not use these API directly.
> >
> > If it is not part of the API, it should not be exposed at all.
> > Why not have all this stuff in a file dmadev_driver.h?
> > Is it used by some inline functions?
>
> Yes, it's used by data-plane inline functions.

OK, please give this reason in the description.
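
For reference, the calling pattern under discussion (raise the device limit
before EAL initialization in the primary process) would look roughly like the
sketch below. It assumes only the rte_dma_dev_max() prototype quoted above;
the limit of 128 is an arbitrary example value, not a recommendation.

    #include <rte_eal.h>
    #include <rte_dmadev.h>

    int
    main(int argc, char **argv)
    {
        /* Must be called before rte_eal_init(); it only takes effect in
         * the primary process. 128 is an arbitrary example limit. */
        if (rte_dma_dev_max(128) != 0)
            return -1;

        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* ... dmadevs are probed and used as usual from here on ... */

        return 0;
    }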
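
To make the reason for exposing rte_dmadev_core.h concrete: a static inline
data-plane function declared in rte_dmadev.h is compiled into the application,
so the internal structure it dereferences must be visible to the application
as well. The sketch below only illustrates that pattern; the structure layout
and the rte_dma_devices/copy names are hypothetical, not the actual v23
definitions.

    #include <stdint.h>
    #include <rte_common.h>

    /* rte_dmadev_core.h (illustrative): internal types, exposed only
     * because the inline fast-path wrapper below must dereference them. */
    struct rte_dma_dev {
        void *dev_private;
        int (*copy)(void *dev_private, uint16_t vchan, rte_iova_t src,
                    rte_iova_t dst, uint32_t length, uint64_t flags);
    };
    extern struct rte_dma_dev rte_dma_devices[];

    /* rte_dmadev.h (illustrative): application-facing data-plane API.
     * Being inline, it ends up in the application binary, so the struct
     * definition above cannot stay private to the library. */
    static inline int
    rte_dma_copy(int16_t dev_id, uint16_t vchan, rte_iova_t src,
                 rte_iova_t dst, uint32_t length, uint64_t flags)
    {
        struct rte_dma_dev *dev = &rte_dma_devices[dev_id];

        return dev->copy(dev->dev_private, vchan, src, dst, length, flags);
    }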