From: Thomas Monjalon
To: Chengwen Feng
Cc: ferruh.yigit@intel.com, bruce.richardson@intel.com, jerinj@marvell.com,
 jerinjacobk@gmail.com, andrew.rybchenko@oktetlabs.ru, dev@dpdk.org,
 mb@smartsharesystems.com, nipun.gupta@nxp.com, hemant.agrawal@nxp.com,
 maxime.coquelin@redhat.com, honnappa.nagarahalli@arm.com,
 david.marchand@redhat.com, sburla@marvell.com, pkapoor@marvell.com,
 konstantin.ananyev@intel.com, conor.walsh@intel.com, kevin.laatz@intel.com
Date: Tue, 12 Oct 2021 21:09:38 +0200
Message-ID: <5838181.K3ZB3fOEaN@thomas>
In-Reply-To: <20211011073348.8235-2-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <20211011073348.8235-1-fengchengwen@huawei.com>
 <20211011073348.8235-2-fengchengwen@huawei.com>
Subject: Re: [dpdk-dev] [PATCH v25 1/6] dmadev: introduce DMA device library

11/10/2021 09:33, Chengwen Feng:
> --- /dev/null
> +++ b/doc/guides/prog_guide/dmadev.rst
> @@ -0,0 +1,60 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright 2021 HiSilicon Limited
> +
> +DMA Device Library
> +==================
> +
> +The DMA library provides a DMA device framework for management and provisioning
> +of hardware and software DMA poll mode drivers, defining generic API which
> +support a number of different DMA operations.
> +
> +
> +Design Principles
> +-----------------
> +
> +The DMA framework provides a generic DMA device framework which supports both
> +physical (hardware) and virtual (software) DMA devices, as well as a generic DMA
> +API which allows DMA devices to be managed and configured, and supports DMA
> +operations to be provisioned on DMA poll mode driver.
> +
> +.. _figure_dmadev:
> +
> +.. figure:: img/dmadev.*
> +
> +The above figure shows the model on which the DMA framework is built on:
> +
> + * The DMA controller could have multiple hardware DMA channels (aka. hardware
> +   DMA queues), each hardware DMA channel should be represented by a dmadev.
> + * The dmadev could create multiple virtual DMA channels, each virtual DMA
> +   channel represents a different transfer context. The DMA operation request
> +   must be submitted to the virtual DMA channel. e.g. Application could create
> +   virtual DMA channel 0 for memory-to-memory transfer scenario, and create
> +   virtual DMA channel 1 for memory-to-device transfer scenario.

When updating the doc, we would like to change a minimum of lines,
so it is better to split lines logically: after a comma, a period,
or before the next part of the sentence (an example is sketched below).
Do not hesitate to make short lines if needed.
Such a change is quite fast to do, thanks.

[...]
> +* **Introduced dmadev library with:**
> +
> +  * Device allocation functions.

You can drop this line, it is not a feature.

[...]
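To illustrate the splitting advice above, the first paragraph of the
quoted text could be rewrapped as follows (only an example of the
wrapping style, not a requested wording change):

  The DMA library provides a DMA device framework for management
  and provisioning of hardware and software DMA poll mode drivers,
  defining generic API which support a number of different DMA operations.

A later wording change then touches only the affected line.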
> +static int
> +dma_dev_data_prepare(void)
> +{
> +	size_t size;
> +
> +	if (rte_dma_devices != NULL)
> +		return 0;
> +
> +	size = dma_devices_max * sizeof(struct rte_dma_dev);
> +	rte_dma_devices = malloc(size);
> +	if (rte_dma_devices == NULL)
> +		return -ENOMEM;
> +	memset(rte_dma_devices, 0, size);
> +
> +	return 0;
> +}
> +
> +static int
> +dma_data_prepare(void)
> +{
> +	if (dma_devices_max == 0)
> +		dma_devices_max = RTE_DMADEV_DEFAULT_MAX;
> +	return dma_dev_data_prepare();
> +}
> +
> +static struct rte_dma_dev *
> +dma_allocate(const char *name, int numa_node, size_t private_data_size)
> +{
> +	struct rte_dma_dev *dev;
> +	void *dev_private;
> +	int16_t dev_id;
> +	int ret;
> +
> +	ret = dma_data_prepare();
> +	if (ret < 0) {
> +		RTE_DMA_LOG(ERR, "Cannot initialize dmadevs data");
> +		return NULL;
> +	}
> +
> +	dev = dma_find_by_name(name);
> +	if (dev != NULL) {
> +		RTE_DMA_LOG(ERR, "DMA device already allocated");
> +		return NULL;
> +	}
> +
> +	dev_private = rte_zmalloc_socket(name, private_data_size,
> +					 RTE_CACHE_LINE_SIZE, numa_node);
> +	if (dev_private == NULL) {
> +		RTE_DMA_LOG(ERR, "Cannot allocate private data");
> +		return NULL;
> +	}
> +
> +	dev_id = dma_find_free_id();
> +	if (dev_id < 0) {
> +		RTE_DMA_LOG(ERR, "Reached maximum number of DMA devices");
> +		rte_free(dev_private);
> +		return NULL;
> +	}
> +
> +	dev = &rte_dma_devices[dev_id];
> +	rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name));
> +	dev->dev_id = dev_id;
> +	dev->numa_node = numa_node;
> +	dev->dev_private = dev_private;
> +
> +	return dev;
> +}
> +
> +static void
> +dma_release(struct rte_dma_dev *dev)
> +{
> +	rte_free(dev->dev_private);
> +	memset(dev, 0, sizeof(struct rte_dma_dev));
> +}
> +
> +struct rte_dma_dev *
> +rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
> +{
> +	struct rte_dma_dev *dev;
> +
> +	if (dma_check_name(name) != 0 || private_data_size == 0)
> +		return NULL;
> +
> +	dev = dma_allocate(name, numa_node, private_data_size);
> +	if (dev == NULL)
> +		return NULL;
> +
> +	dev->state = RTE_DMA_DEV_REGISTERED;
> +
> +	return dev;
> +}
> +
> +int
> +rte_dma_pmd_release(const char *name)
> +{
> +	struct rte_dma_dev *dev;
> +
> +	if (dma_check_name(name) != 0)
> +		return -EINVAL;
> +
> +	dev = dma_find_by_name(name);
> +	if (dev == NULL)
> +		return -EINVAL;
> +
> +	dma_release(dev);
> +	return 0;
> +}

Trying to understand the logic of creation/destruction:

skeldma_probe
\-> skeldma_create
	\-> rte_dma_pmd_allocate
		\-> dma_allocate
			\-> dma_data_prepare
				\-> dma_dev_data_prepare

skeldma_remove
\-> skeldma_destroy
	\-> rte_dma_pmd_release
		\-> dma_release

app
\-> rte_dma_close
	\-> skeldma_close
		\-> dma_release

My only concern is that the PMD remove path does not call rte_dma_close.
The PMD should check which dmadev is open for the rte_device being removed,
and close that dmadev first.
This way, there is no need for the function rte_dma_pmd_release,
and no need to duplicate the release process in two paths.

By the way, the function vchan_release is called only in the close function,
not in the "remove path".
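To make the suggestion concrete, here is a minimal sketch of a remove
path that closes the dmadev instead of calling rte_dma_pmd_release().
It assumes a vdev-based PMD; rte_dma_get_dev_id_by_name() stands in
for whatever by-name lookup helper the series provides, so treat that
exact name as hypothetical:

#include <rte_bus_vdev.h>
#include <rte_dmadev.h>

static int
skeldma_remove(struct rte_vdev_device *vdev)
{
	const char *name = rte_vdev_device_name(vdev);
	int dev_id;

	if (name == NULL)
		return -EINVAL;

	/* Find the dmadev created for this rte_device at probe time. */
	dev_id = rte_dma_get_dev_id_by_name(name);
	if (dev_id < 0)
		return dev_id;

	/*
	 * rte_dma_close() invokes the PMD close callback (skeldma_close)
	 * and then releases the dmadev itself, so the release logic lives
	 * in a single path and vchan_release is reached from the remove
	 * path as well.
	 */
	return rte_dma_close(dev_id);
}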