To: Thomas Monjalon
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <20211011073348.8235-1-fengchengwen@huawei.com> <20211011073348.8235-2-fengchengwen@huawei.com> <5838181.K3ZB3fOEaN@thomas>
From: fengchengwen
Message-ID: <0e44a184-3540-bb2e-28c7-ba4b0fc4df98@huawei.com>
Date: Wed, 13 Oct 2021 08:21:48 +0800
In-Reply-To: <5838181.K3ZB3fOEaN@thomas>
Subject: Re: [dpdk-dev] [PATCH v25 1/6] dmadev: introduce DMA device library
List-Id: DPDK patches and discussions <dev@dpdk.org>

On 2021/10/13 3:09, Thomas Monjalon wrote:
> 11/10/2021 09:33, Chengwen Feng:
>> --- /dev/null
>> +++ b/doc/guides/prog_guide/dmadev.rst
>> @@ -0,0 +1,60 @@
>> +.. SPDX-License-Identifier: BSD-3-Clause
>> +   Copyright 2021 HiSilicon Limited
>> +
>> +DMA Device Library
>> +==================
>> +
>> +The DMA library provides a DMA device framework for management and provisioning
>> +of hardware and software DMA poll mode drivers, defining generic API which
>> +support a number of different DMA operations.
>> +
>> +
>> +Design Principles
>> +-----------------
>> +
>> +The DMA framework provides a generic DMA device framework which supports both
>> +physical (hardware) and virtual (software) DMA devices, as well as a generic DMA
>> +API which allows DMA devices to be managed and configured, and supports DMA
>> +operations to be provisioned on DMA poll mode driver.
>> +
>> +.. _figure_dmadev:
>> +
>> +.. figure:: img/dmadev.*
>> +
>> +The above figure shows the model on which the DMA framework is built on:
>> +
>> + * The DMA controller could have multiple hardware DMA channels (aka. hardware
>> +   DMA queues), each hardware DMA channel should be represented by a dmadev.
>> + * The dmadev could create multiple virtual DMA channels, each virtual DMA
>> +   channel represents a different transfer context. The DMA operation request
>> +   must be submitted to the virtual DMA channel. e.g. Application could create
>> +   virtual DMA channel 0 for memory-to-memory transfer scenario, and create
>> +   virtual DMA channel 1 for memory-to-device transfer scenario.
>
> When updating the doc, we would like to change a minimum of lines,
> so it's better to split lines logically: after a comma, a point,
> or before the next part of the sentence.
> Do not hesitate to make short lines if needed.
> Such change is quite fast to do, thanks.
>
> [...]
>> +* **Introduced dmadev library with:**
>> +
>> +  * Device allocation functions.
>
> You can drop this line, it is not a feature.

I'm going to try another description.

> [...]
>> +static int
>> +dma_dev_data_prepare(void)
>> +{
>> +	size_t size;
>> +
>> +	if (rte_dma_devices != NULL)
>> +		return 0;
>> +
>> +	size = dma_devices_max * sizeof(struct rte_dma_dev);
>> +	rte_dma_devices = malloc(size);
>> +	if (rte_dma_devices == NULL)
>> +		return -ENOMEM;
>> +	memset(rte_dma_devices, 0, size);
>> +
>> +	return 0;
>> +}
>> +
>> +static int
>> +dma_data_prepare(void)
>> +{
>> +	if (dma_devices_max == 0)
>> +		dma_devices_max = RTE_DMADEV_DEFAULT_MAX;
>> +	return dma_dev_data_prepare();
>> +}
>> +
>> +static struct rte_dma_dev *
>> +dma_allocate(const char *name, int numa_node, size_t private_data_size)
>> +{
>> +	struct rte_dma_dev *dev;
>> +	void *dev_private;
>> +	int16_t dev_id;
>> +	int ret;
>> +
>> +	ret = dma_data_prepare();
>> +	if (ret < 0) {
>> +		RTE_DMA_LOG(ERR, "Cannot initialize dmadevs data");
>> +		return NULL;
>> +	}
>> +
>> +	dev = dma_find_by_name(name);
>> +	if (dev != NULL) {
>> +		RTE_DMA_LOG(ERR, "DMA device already allocated");
>> +		return NULL;
>> +	}
>> +
>> +	dev_private = rte_zmalloc_socket(name, private_data_size,
>> +					 RTE_CACHE_LINE_SIZE, numa_node);
>> +	if (dev_private == NULL) {
>> +		RTE_DMA_LOG(ERR, "Cannot allocate private data");
>> +		return NULL;
>> +	}
>> +
>> +	dev_id = dma_find_free_id();
>> +	if (dev_id < 0) {
>> +		RTE_DMA_LOG(ERR, "Reached maximum number of DMA devices");
>> +		rte_free(dev_private);
>> +		return NULL;
>> +	}
>> +
>> +	dev = &rte_dma_devices[dev_id];
>> +	rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name));
>> +	dev->dev_id = dev_id;
>> +	dev->numa_node = numa_node;
>> +	dev->dev_private = dev_private;
>> +
>> +	return dev;
>> +}
>> +
>> +static void
>> +dma_release(struct rte_dma_dev *dev)
>> +{
>> +	rte_free(dev->dev_private);
>> +	memset(dev, 0, sizeof(struct rte_dma_dev));
>> +}
>> +
>> +struct rte_dma_dev *
>> +rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
>> +{
>> +	struct rte_dma_dev *dev;
>> +
>> +	if (dma_check_name(name) != 0 || private_data_size == 0)
>> +		return NULL;
>> +
>> +	dev = dma_allocate(name, numa_node, private_data_size);
>> +	if (dev == NULL)
>> +		return NULL;
>> +
>> +	dev->state = RTE_DMA_DEV_REGISTERED;
>> +
>> +	return dev;
>> +}
>> +
>> +int
>> +rte_dma_pmd_release(const char *name)
>> +{
>> +	struct rte_dma_dev *dev;
>> +
>> +	if (dma_check_name(name) != 0)
>> +		return -EINVAL;
>> +
>> +	dev = dma_find_by_name(name);
>> +	if (dev == NULL)
>> +		return -EINVAL;
>> +
>> +	dma_release(dev);
>> +	return 0;
>> +}
>
> Trying to understand the logic of creation/destroy.
> skeldma_probe
>   \-> skeldma_create
>     \-> rte_dma_pmd_allocate
>       \-> dma_allocate
>         \-> dma_data_prepare
>           \-> dma_dev_data_prepare
> skeldma_remove
>   \-> skeldma_destroy
>     \-> rte_dma_pmd_release
>       \-> dma_release

This patch only provides the device allocation functions; the second patch
adds the extra logic:

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 42a4693bd9..a6a5680d2b 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -201,6 +201,9 @@ rte_dma_pmd_release(const char *name)
 	if (dev == NULL)
 		return -EINVAL;
 
+	if (dev->state == RTE_DMA_DEV_READY)
+		return rte_dma_close(dev->dev_id);
+
 	dma_release(dev);
 	return 0;
 }

So the skeldma remove path becomes:

skeldma_remove
  \-> skeldma_destroy
    \-> rte_dma_pmd_release
      \-> rte_dma_close
        \-> dma_release

> app
>   \-> rte_dma_close
>     \-> skeldma_close
>       \-> dma_release
>
> My only concern is that the PMD remove does not call rte_dma_close.

If the device was created successfully, rte_dma_close() will be called in the
device remove phase.

> The PMD should check which dmadev is open for the rte_device to remove,
> and close the dmadev first.
> This way, no need for the function rte_dma_pmd_release,
> and no need to duplicate the release process in two paths.
> By the way, the function vchan_release is called only in the close function,
> not in the "remove path".

Thanks
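
To spell out the combined logic for anyone reading along: below is roughly how
rte_dma_pmd_release() reads once the patch-2 hunk above is applied to the v25
function quoted earlier. This is a sketch merged from the two quoted snippets,
not code copied verbatim from patch 2; the helpers dma_check_name(),
dma_find_by_name() and dma_release() are the ones shown above.

/* Sketch: v25 rte_dma_pmd_release() with the patch-2 hunk applied. */
int
rte_dma_pmd_release(const char *name)
{
	struct rte_dma_dev *dev;

	if (dma_check_name(name) != 0)
		return -EINVAL;

	dev = dma_find_by_name(name);
	if (dev == NULL)
		return -EINVAL;

	/*
	 * A device that reached READY (i.e. its probe completed) is torn down
	 * through rte_dma_close(), so the PMD close callback, and with it
	 * vchan_release(), is reached from the remove path as well.
	 */
	if (dev->state == RTE_DMA_DEV_READY)
		return rte_dma_close(dev->dev_id);

	/*
	 * A device that never reached READY only holds the generic data set up
	 * by dma_allocate(), which dma_release() frees.
	 */
	dma_release(dev);
	return 0;
}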