From: Jerin Jacob
Date: Fri, 16 Jul 2021 18:50:42 +0530
To: Chengwen Feng
Cc: Thomas Monjalon, Ferruh Yigit, "Richardson, Bruce", Jerin Jacob,
 Andrew Rybchenko, dpdk-dev, Morten Brørup, Nipun Gupta, Hemant Agrawal,
 Maxime Coquelin, Honnappa Nagarahalli, David Marchand, Satananda Burla,
 Prasun Kapoor, "Ananyev, Konstantin"
Subject: Re: [dpdk-dev] [PATCH v5] dmadev: introduce DMA device library
In-Reply-To: <1626403535-40051-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <1626403535-40051-1-git-send-email-fengchengwen@huawei.com>
List-Id: DPDK patches and discussions

On Fri, Jul 16, 2021 at 8:19 AM Chengwen Feng wrote:
>
> This patch introduce 'dmadevice' which is a generic type of DMA
> device.
>
> The APIs of dmadev library exposes some generic operations which can
> enable configuration and I/O with the DMA devices.
>
> Signed-off-by: Chengwen Feng
> ---
> v5:
> * add doxy-api-* file modify.
> * use RTE_LOG_REGISTER_DEFAULT.
> * fix typo.
> * resolve some incorrect comments.
> * fix some doxgen problem.
> * fix version.map still hold rte_dmadev_completed_fails.
> v4:
> * replace xxx_complete_fails with xxx_completed_status.
> * add SILENT capability, also a silent_mode in rte_dmadev_conf.
> * add op_flag_llc for performance.
> * rename dmadev_xxx_t to rte_dmadev_xxx_t to avoid namespace conflict.
> * delete filed 'enqueued_count' from rte_dmadev_stats.
> * make rte_dmadev hold 'dev_private' filed.
> * add RTE_DMA_STATUS_NOT_ATTEMPED status code.
> * rename RTE_DMA_STATUS_ACTIVE_DROP to RTE_DMA_STATUS_USER_ABORT.
> * rename rte_dma_sg(e) to rte_dmadev_sg(e) to make sure all struct
>   prefix with rte_dmadev.
> * put the comment afterwards.
> * fix some doxgen problem.
> * delete macro RTE_DMADEV_VALID_DEV_ID_OR_RET and
>   RTE_DMADEV_PTR_OR_ERR_RET.
> * replace strlcpy with rte_strscpy.
> * other minor modifications from review comment.
> v3:
> * rm reset and fill_sg ops.
> * rm MT-safe capabilities.
> * add submit flag.
> * redefine rte_dma_sg to implement asymmetric copy.
> * delete some reserved field for future use.
> * rearrangement rte_dmadev/rte_dmadev_data struct.
> * refresh rte_dmadev.h copyright.
> * update vchan setup parameter.
> * modified some inappropriate descriptions.
> * arrange version.map alphabetically.
> * other minor modifications from review comment.
> ---
>  MAINTAINERS                  |    4 +
>  config/rte_config.h          |    3 +
>  doc/api/doxy-api-index.md    |    1 +
>  doc/api/doxy-api.conf.in     |    1 +
>  lib/dmadev/meson.build       |    7 +
>  lib/dmadev/rte_dmadev.c      |  539 ++++++++++++++++++++++
>  lib/dmadev/rte_dmadev.h      | 1028 ++++++++++++++++++++++++++++++++++++++++++
>  lib/dmadev/rte_dmadev_core.h |  182 ++++++++
>  lib/dmadev/rte_dmadev_pmd.h  |   72 +++
>  lib/dmadev/version.map       |   37 ++
>  lib/meson.build              |    1 +
>  11 files changed, 1875 insertions(+)
>  create mode 100644 lib/dmadev/meson.build
>  create mode 100644 lib/dmadev/rte_dmadev.c
>  create mode 100644 lib/dmadev/rte_dmadev.h
>  create mode 100644 lib/dmadev/rte_dmadev_core.h
>  create mode 100644 lib/dmadev/rte_dmadev_pmd.h
>  create mode 100644 lib/dmadev/version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index af2a91d..e01a07f 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -495,6 +495,10 @@ F: drivers/raw/skeleton/
>  F: app/test/test_rawdev.c
>  F: doc/guides/prog_guide/rawdev.rst
>
> +DMA device API - EXPERIMENTAL
> +M: Chengwen Feng
> +F: lib/dmadev/
> +
> @@ -0,0 +1,1028 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 HiSilicon Limited.
> + * Copyright(c) 2021 Intel Corporation.
> + * Copyright(c) 2021 Marvell International Ltd.
> + * Copyright(c) 2021 SmartShare Systems.
> + */
> +
> +#ifndef _RTE_DMADEV_H_
> +#define _RTE_DMADEV_H_
> +
> +/**
> + * @file rte_dmadev.h
> + *
> + * RTE DMA (Direct Memory Access) device APIs.
> + *
> + * The DMA framework is built on the following model:
> + *
> + *     ---------------   ---------------       ---------------
> + *     | virtual DMA |   | virtual DMA |       | virtual DMA |
> + *     | channel     |   | channel     |       | channel     |
> + *     ---------------   ---------------       ---------------
> + *            |                |                      |
> + *            ------------------                      |
> + *                    |                               |
> + *              ------------                    ------------
> + *              |  dmadev  |                    |  dmadev  |
> + *              ------------                    ------------
> + *                    |                               |
> + *           ------------------               ------------------
> + *           | HW-DMA-channel |               | HW-DMA-channel |
> + *           ------------------               ------------------
> + *                    |                               |
> + *                    --------------------------------
> + *                                    |
> + *                          ---------------------
> + *                          | HW-DMA-Controller |
> + *                          ---------------------
> + *
> + * The DMA controller could have multiple HW-DMA-channels (aka. HW-DMA-queues),
> + * each HW-DMA-channel should be represented by a dmadev.
> + *
> + * The dmadev could create multiple virtual DMA channel, each virtual DMA

channels?

> + * channel represents a different transfer context. The DMA operation request
> + * must be submitted to the virtual DMA channel. e.g. Application could create
> + * virtual DMA channel 0 for memory-to-memory transfer scenario, and create
> + * virtual DMA channel 1 for memory-to-device transfer scenario.
> + *
> + * The dmadev are dynamically allocated by rte_dmadev_pmd_allocate() during the
> + * PCI/SoC device probing phase performed at EAL initialization time. And could
> + * be released by rte_dmadev_pmd_release() during the PCI/SoC device removing
> + * phase.
> + *
> + * This framework uses 'uint16_t dev_id' as the device identifier of a dmadev,
> + * and 'uint16_t vchan' as the virtual DMA channel identifier in one dmadev.
> + *
> + * The functions exported by the dmadev API to setup a device designated by its
> + * device identifier must be invoked in the following order:
> + *     - rte_dmadev_configure()
> + *     - rte_dmadev_vchan_setup()
> + *     - rte_dmadev_start()
> + *
> + * Then, the application can invoke dataplane APIs to process jobs.
> + *
> + * If the application wants to change the configuration (i.e. invoke
> + * rte_dmadev_configure()), it must invoke rte_dmadev_stop() first to stop the
> + * device and then do the reconfiguration before invoking rte_dmadev_start()
> + * again. The dataplane APIs should not be invoked when the device is stopped.
> + *
> + * Finally, an application can close a dmadev by invoking the
> + * rte_dmadev_close() function.
> + *
> + * The dataplane APIs include two parts:
> + * The first part is the submission of operation requests:
> + *     - rte_dmadev_copy()
> + *     - rte_dmadev_copy_sg()
> + *     - rte_dmadev_fill()
> + *     - rte_dmadev_submit()
> + *
> + * These APIs could work with different virtual DMA channels which have
> + * different contexts.
> + *
> + * The first three APIs are used to submit the operation request to the virtual
> + * DMA channel, if the submission is successful, a uint16_t ring_idx is
> + * returned, otherwise a negative number is returned.
> + *
> + * The last API was use to issue doorbell to hardware, and also there are flags

use->used?

> + * (@see RTE_DMA_OP_FLAG_SUBMIT) parameter of the first three APIs could do the
> + * same work.
> + *
> + * The second part is to obtain the result of requests:
> + *     - rte_dmadev_completed()
> + *         - return the number of operation requests completed successfully.
> + *     - rte_dmadev_completed_status()
> + *         - return the number of operation requests completed.
> + *
> + * About the ring_idx which enqueue APIs (e.g. rte_dmadev_copy()
> + * rte_dmadev_fill()) returned, the rules are as follows:
> + *     - ring_idx for each virtual DMA channel are independent.
> + *     - For a virtual DMA channel, the ring_idx is monotonically incremented,
> + *       when it reach UINT16_MAX, it wraps back to zero.
> + *     - This ring_idx can be used by applications to track per-operation
> + *       metadata in an application-defined circular ring.
> + *     - The initial ring_idx of a virtual DMA channel is zero, after the
> + *       device is stopped, the ring_idx needs to be reset to zero.
> + *
> + * One example:
> + *     - step-1: start one dmadev
> + *     - step-2: enqueue a copy operation, the ring_idx return is 0
> + *     - step-3: enqueue a copy operation again, the ring_idx return is 1
> + *     - ...
> + *     - step-101: stop the dmadev
> + *     - step-102: start the dmadev
> + *     - step-103: enqueue a copy operation, the cookie return is 0
> + *     - ...
> + *     - step-x+0: enqueue a fill operation, the ring_idx return is 65535
> + *     - step-x+1: enqueue a copy operation, the ring_idx return is 0
> + *     - ...
> + *
> + * By default, all the functions of the dmadev API exported by a PMD are
> + * lock-free functions which assume to not be invoked in parallel on different
> + * logical cores to work on the same target object.

Should we add examples of such objects, like dev_id, vchan etc.? Up to you.

IMO, we should describe "silent" mode here, and its implication on
rte_dmadev_completed_*, since "silent" changes the above scheme on the
completion side.

> +/**
> + * A structure used to retrieve the information of an DMA device.

a DMA?

> + * A structure used to descript DMA port parameters.
> + */
> +struct rte_dmadev_port_param {
> +	enum rte_dmadev_port_type port_type; /**< The device port type. */
> +	union {
> +		/** For PCIE port:
> +		 *
> +		 * The following model show SoC's PCIE module connects to

shows?

> +		 * multiple PCIE hosts and multiple endpoints. The PCIE module
> +		 * has an integrate DMA controller.
> +		 * If the DMA wants to access the memory of host A, it can be
> +		 * initiated by PF1 in core0, or by VF0 of PF0 in core0.
> +		 *
> +		 * System Bus
> +		 *  |     ----------PCIE module----------
> +		 *  |     Bus
> +		 *  |     Interface
> +		 *  |     -----        ------------------
> +		 *  |     |   |        | PCIE Core0     |
> +		 *  |     |   |        |                |        -----------
> +		 *  |     |   |        |   PF-0 -- VF-0 |        | Host A  |
> +		 *  |     |   |--------|        |- VF-1 |--------| Root    |
> +		 *  |     |   |        |   PF-1         |        | Complex |
> +		 *  |     |   |        |   PF-2         |        -----------
> +		 *  |     |   |        ------------------
> +		 *  |     |   |
> +		 *  |     |   |        ------------------
> +		 *  |     |   |        | PCIE Core1     |
> +		 *  |     |   |        |                |        -----------
> +		 *  |     |   |        |   PF-0 -- VF-0 |        | Host B  |
> +		 *  |-----|   |--------|   PF-1 -- VF-0 |--------| Root    |
> +		 *  |     |   |        |        |- VF-1 |        | Complex |
> +		 *  |     |   |        |   PF-2         |        -----------
> +		 *  |     |   |        ------------------
> +		 *  |     |   |
> +		 *  |     |   |        ------------------
> +		 *  |     |DMA|        |                |        ------
> +		 *  |     |   |        |                |--------| EP |
> +		 *  |     |   |--------| PCIE Core2     |        ------
> +		 *  |     |   |        |                |        ------
> +		 *  |     |   |        |                |--------| EP |
> +		 *  |     |   |        |                |        ------
> +		 *  |     -----        ------------------
> +		 *
> +		 * The following structure is used to describe the above access
> +		 * port.
> +		 *
> +		 * @note If some fields are not supported by hardware, set
> +		 * these fields to zero. And also there are no capabilities
> +		 * defined for this, it is the duty of the application to set
> +		 * the correct parameters.

Without getting the capabilities, the application cannot know what to set
to zero. I would suggest rewording to something like:

@note If some fields can not be supported by the hardware/driver, then the
driver ignores those fields. Please check driver-specific documentation for
limitations and capabilities.

> +		 */
> +		struct {
> +			uint64_t coreid : 4; /**< PCIE core id used. */
> +			uint64_t pfid : 8; /**< PF id used. */
> +			uint64_t vfen : 1; /**< VF enable bit. */
> +			uint64_t vfid : 16; /**< VF id used. */
> +			uint64_t pasid : 20;
> +			/**< The pasid filed in TLP packet. */
> +			uint64_t attr : 3;
> +			/**< The attributes filed in TLP packet. */
> +			uint64_t ph : 2;
> +			/**< The processing hint filed in TLP packet. */
> +			uint64_t st : 16;
> +			/**< The steering tag filed in TLP packet. */
> +		} pcie;
> +	};
> +	uint64_t reserved[2]; /**< Reserved for future fields. */
> +};
> +
> +/**
> + * A structure used to configure a virtual DMA channel.
> + */
> +struct rte_dmadev_vchan_conf {
> +	uint8_t direction;
> +	/**< Set of supported transfer directions

Transfer direction

> +	 * @see RTE_DMA_DIR_MEM_TO_MEM
> +	 * @see RTE_DMA_DIR_MEM_TO_DEV
> +	 * @see RTE_DMA_DIR_DEV_TO_MEM
> +	 * @see RTE_DMA_DIR_DEV_TO_DEV
> +	 */
> +
> +	/** Number of descriptor for the virtual DMA channel */
> +	uint16_t nb_desc;
> +	/** 1) Used to describes the port parameter in the device-to-memory
> +	 * transfer scenario.
> +	 * 2) Used to describes the source port parameter in the
> +	 * device-to-device transfer scenario.
> +	 * @see struct rte_dmadev_port_param
> +	 */
> +	struct rte_dmadev_port_param src_port;
> +	/** 1) Used to describes the port parameter in the memory-to-device
> +	 * transfer scenario.
> +	 * 2) Used to describes the destination port parameter in the
> +	 * device-to-device transfer scenario.
> +	 * @see struct rte_dmadev_port_param
> +	 */
> +	struct rte_dmadev_port_param dst_port;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Allocate and set up a virtual DMA channel.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param conf
> + *   The virtual DMA channel configuration structure encapsulated into
> + *   rte_dmadev_vchan_conf object.
> + *
> + * @return
> + *   - >=0: Allocate success, it is the virtual DMA channel id. This value must
> + *          be less than the field 'max_vchans' of struct rte_dmadev_conf
> + *          which configured by rte_dmadev_configure().
> + *   - <0: Error code returned by the driver virtual channel setup function.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_vchan_setup(uint16_t dev_id,
> +		       const struct rte_dmadev_vchan_conf *conf);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Release a virtual DMA channel.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel which return by vchan setup.
> + *
> + * @return
> + *   - =0: Successfully release the virtual DMA channel.
> + *   - <0: Error code returned by the driver virtual channel release function.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_vchan_release(uint16_t dev_id, uint16_t vchan);

Commented on another thread.

> +
> +/**
> + * rte_dmadev_stats - running statistics.
> + */
> +struct rte_dmadev_stats {
> +	uint64_t submitted_count;
> +	/**< Count of operations which were submitted to hardware. */
> +	uint64_t completed_fail_count;
> +	/**< Count of operations which failed to complete. */
> +	uint64_t completed_count;
> +	/**< Count of operations which successfully complete. */
> +};
> +
> +#define RTE_DMADEV_ALL_VCHAN	0xFFFFu

Commented on another thread. No strong opinion.

> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a copy operation onto the virtual DMA channel.
> + *
> + * This queues up a copy operation to be performed by hardware, if the 'flags'
> + * parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger hardware to begin
> + * this operation, otherwise do not trigger hardware.

hardware -> doorbell.

> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param src
> + *   The address of the source buffer.
> + * @param dst
> + *   The address of the destination buffer.
> + * @param length
> + *   The length of the data to be copied.
> + * @param flags
> + *   An flags for this operation.
> + *   @see RTE_DMA_OP_FLAG_*
> + *
> + * @return
> + *   - 0..UINT16_MAX: index of enqueued copy job.
> + *   - <0: Error code returned by the driver copy function.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
> +		uint32_t length, uint64_t flags)
> +{
> +	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +	if (!rte_dmadev_is_valid_dev(dev_id) ||
> +	    vchan >= dev->data->dev_conf.max_vchans)
> +		return -EINVAL;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP);
> +#endif
> +
> +	return (*dev->copy)(dev, vchan, src, dst, length, flags);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a scatter list copy operation onto the virtual DMA channel.
> + *
> + * This queues up a scatter list copy operation to be performed by hardware, if
> + * the 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger hardware
> + * to begin this operation, otherwise do not trigger hardware.

hardware -> doorbell

> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param sg
> + *   The pointer of scatterlist.
> + * @param flags
> + *   An flags for this operation.
> + *   @see RTE_DMA_OP_FLAG_*
> + *
> + * @return
> + *   - 0..UINT16_MAX: index of enqueued copy scatterlist job.
> + *   - <0: Error code returned by the driver copy scatterlist function.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan,
> +		   const struct rte_dmadev_sg *sg,

Sent comment in another thread.
> +		   uint64_t flags)
> +{
> +	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +	if (!rte_dmadev_is_valid_dev(dev_id) ||
> +	    vchan >= dev->data->dev_conf.max_vchans ||
> +	    sg == NULL || sg->src == NULL || sg->dst == NULL ||
> +	    sg->nb_src == 0 || sg->nb_dst == 0)
> +		return -EINVAL;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP);
> +#endif
> +
> +	return (*dev->copy_sg)(dev, vchan, sg, flags);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a fill operation onto the virtual DMA channel.
> + *
> + * This queues up a fill operation to be performed by hardware, if the 'flags'
> + * parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger hardware to begin
> + * this operation, otherwise do not trigger hardware.

hardware -> doorbell.

> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param pattern
> + *   The pattern to populate the destination buffer with.
> + * @param dst
> + *   The address of the destination buffer.
> + * @param length
> + *   The length of the destination buffer.
> + * @param flags
> + *   An flags for this operation.
> + *   @see RTE_DMA_OP_FLAG_*
> + *
> + * @return
> + *   - 0..UINT16_MAX: index of enqueued fill job.
> + *   - <0: Error code returned by the driver fill function.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern,
> +		rte_iova_t dst, uint32_t length, uint64_t flags)
> +{
> +	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +	if (!rte_dmadev_is_valid_dev(dev_id) ||
> +	    vchan >= dev->data->dev_conf.max_vchans)
> +		return -EINVAL;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP);
> +#endif
> +
> +	return (*dev->fill)(dev, vchan, pattern, dst, length, flags);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Trigger hardware to begin performing enqueued operations.
> + *
> + * This API is used to write the "doorbell" to the hardware to trigger it
> + * to begin the operations previously enqueued by rte_dmadev_copy/fill().
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + *
> + * @return
> + *   - =0: Successfully trigger hardware.
> + *   - <0: Failure to trigger hardware.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_submit(uint16_t dev_id, uint16_t vchan)
> +{
> +	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +	if (!rte_dmadev_is_valid_dev(dev_id) ||
> +	    vchan >= dev->data->dev_conf.max_vchans)
> +		return -EINVAL;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP);
> +#endif
> +
> +	return (*dev->submit)(dev, vchan);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Returns the number of operations that have been successfully completed.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param nb_cpls
> + *   The maximum number of completed operations that can be processed.
> + * @param[out] last_idx
> + *   The last completed operation's index.
> + *   If not required, NULL can be passed in.
> + * @param[out] has_error
> + *   Indicates if there are transfer error.
> + *   If not required, NULL can be passed in.
> + *
> + * @return
> + *   The number of operations that successfully completed. This return value
> + *   must be less than or equal to the value of nb_cpls.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls,
> +		     uint16_t *last_idx, bool *has_error)
> +{
> +	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +	uint16_t idx;
> +	bool err;
> +
> +#ifdef RTE_DMADEV_DEBUG
> +	if (!rte_dmadev_is_valid_dev(dev_id) ||
> +	    vchan >= dev->data->dev_conf.max_vchans ||
> +	    nb_cpls == 0)
> +		return 0;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, 0);
> +#endif
> +
> +	/* Ensure the pointer values are non-null to simplify drivers.
> +	 * In most cases these should be compile time evaluated, since this is
> +	 * an inline function.
> +	 * - If NULL is explicitly passed as parameter, then compiler knows the
> +	 *   value is NULL
> +	 * - If address of local variable is passed as parameter, then compiler
> +	 *   can know it's non-NULL.
> +	 */
> +	if (last_idx == NULL)
> +		last_idx = &idx;
> +	if (has_error == NULL)
> +		has_error = &err;
> +
> +	*has_error = false;
> +	return (*dev->completed)(dev, vchan, nb_cpls, last_idx, has_error);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Returns the number of operations that have been completed, and the
> + * operations result may succeed or fail.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param nb_cpls
> + *   Indicates the size of status array.
> + * @param[out] last_idx
> + *   The last completed operation's index.
> + *   If not required, NULL can be passed in.
> + * @param[out] status
> + *   The error code of operations that completed.
> + *   @see enum rte_dma_status_code
> + *
> + * @return
> + *   The number of operations that completed. This return value must be less
> + *   than or equal to the value of nb_cpls.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_dmadev_completed_status(uint16_t dev_id, uint16_t vchan,
> +			    const uint16_t nb_cpls, uint16_t *last_idx,
> +			    enum rte_dma_status_code *status)
> +{
> +	struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +	uint16_t idx;
> +
> +#ifdef RTE_DMADEV_DEBUG
> +	if (!rte_dmadev_is_valid_dev(dev_id) ||
> +	    vchan >= dev->data->dev_conf.max_vchans ||
> +	    nb_cpls == 0 || status == NULL)
> +		return 0;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_status, 0);
> +#endif
> +
> +	if (last_idx == NULL)
> +		last_idx = &idx;
> +
> +	return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +

Thanks for v5