From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Date: Tue, 13 Jul 2021 14:37:04 +0530
Subject: Re: [dpdk-dev] [PATCH v2] dmadev: introduce DMA device library
To: Bruce Richardson
Cc: Chengwen Feng, Thomas Monjalon, Ferruh Yigit, Jerin Jacob, dpdk-dev, Morten Brørup, Nipun Gupta, Hemant Agrawal, Maxime Coquelin, Honnappa Nagarahalli, David Marchand, Satananda Burla, Prasun Kapoor, "Ananyev, Konstantin", liangma@liangbit.com
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1625995556-41473-1-git-send-email-fengchengwen@huawei.com>
List-Id: DPDK patches and discussions

On Mon, Jul 12, 2021 at 9:21 PM Bruce Richardson wrote: > > On Sun, Jul 11, 2021 at 05:25:56PM +0800, Chengwen Feng wrote: > > This patch introduce 'dmadevice' which is a generic type of DMA > > device. > > > > The APIs of dmadev library exposes some generic operations which can > > enable configuration and I/O with the DMA devices. > > > > Signed-off-by: Chengwen Feng > > Hi again, > > some further review comments inline.
> > /Bruce > > > --- > > MAINTAINERS | 4 + > > config/rte_config.h | 3 + > > lib/dmadev/meson.build | 6 + > > lib/dmadev/rte_dmadev.c | 560 +++++++++++++++++++++++ > > lib/dmadev/rte_dmadev.h | 1030 ++++++++++++++++++++++++++++++++++++++++++ > > lib/dmadev/rte_dmadev_core.h | 159 +++++++ > > lib/dmadev/rte_dmadev_pmd.h | 72 +++ > > lib/dmadev/version.map | 40 ++ > > lib/meson.build | 1 + > > > > > diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h > > new file mode 100644 > > index 0000000..8779512 > > --- /dev/null > > +++ b/lib/dmadev/rte_dmadev.h > > @@ -0,0 +1,1030 @@ > > +/* SPDX-License-Identifier: BSD-3-Clause > > + * Copyright(c) 2021 HiSilicon Limited. > > + * Copyright(c) 2021 Intel Corporation. > > + * Copyright(c) 2021 Marvell International Ltd. > > + */ > > + > > +#ifndef _RTE_DMADEV_H_ > > +#define _RTE_DMADEV_H_ > > + > > +/** > > + * @file rte_dmadev.h > > + * > > + * RTE DMA (Direct Memory Access) device APIs. > > + * > > + * The DMA framework is built on the following model: > > + * > > + * --------------- --------------- --------------- > > + * | virtual DMA | | virtual DMA | | virtual DMA | > > + * | channel | | channel | | channel | > > + * --------------- --------------- --------------- > > + * | | | > > + * ------------------ | > > + * | | > > + * ------------ ------------ > > + * | dmadev | | dmadev | > > + * ------------ ------------ > > + * | | > > + * ------------------ ------------------ > > + * | HW-DMA-channel | | HW-DMA-channel | > > + * ------------------ ------------------ > > + * | | > > + * -------------------------------- > > + * | > > + * --------------------- > > + * | HW-DMA-Controller | > > + * --------------------- > > + * > > + * The DMA controller could have multilpe HW-DMA-channels (aka. HW-DMA-queues), > > + * each HW-DMA-channel should be represented by a dmadev. > > + * > > + * The dmadev could create multiple virtual DMA channel, each virtual DMA > > + * channel represents a different transfer context. The DMA operation request > > + * must be submitted to the virtual DMA channel. > > + * E.G. Application could create virtual DMA channel 0 for mem-to-mem transfer > > + * scenario, and create virtual DMA channel 1 for mem-to-dev transfer > > + * scenario. > > + * > > + * The dmadev are dynamically allocated by rte_dmadev_pmd_allocate() during the > > + * PCI/SoC device probing phase performed at EAL initialization time. And could > > + * be released by rte_dmadev_pmd_release() during the PCI/SoC device removing > > + * phase. > > + * > > + * We use 'uint16_t dev_id' as the device identifier of a dmadev, and > > + * 'uint16_t vchan' as the virtual DMA channel identifier in one dmadev. > > + * > > + * The functions exported by the dmadev API to setup a device designated by its > > + * device identifier must be invoked in the following order: > > + * - rte_dmadev_configure() > > + * - rte_dmadev_vchan_setup() > > + * - rte_dmadev_start() > > + * > > + * Then, the application can invoke dataplane APIs to process jobs. > > + * > > + * If the application wants to change the configuration (i.e. call > > + * rte_dmadev_configure()), it must call rte_dmadev_stop() first to stop the > > + * device and then do the reconfiguration before calling rte_dmadev_start() > > + * again. The dataplane APIs should not be invoked when the device is stopped. > > + * > > + * Finally, an application can close a dmadev by invoking the > > + * rte_dmadev_close() function. 
> > + * > > + * The dataplane APIs include two parts: > > + * a) The first part is the submission of operation requests: > > + * - rte_dmadev_copy() > > + * - rte_dmadev_copy_sg() - scatter-gather form of copy > > + * - rte_dmadev_fill() > > + * - rte_dmadev_fill_sg() - scatter-gather form of fill > > + * - rte_dmadev_perform() - issue doorbell to hardware > > + * These APIs could work with different virtual DMA channels which have > > + * different contexts. > > + * The first four APIs are used to submit the operation request to the > > + * virtual DMA channel, if the submission is successful, a uint16_t > > + * ring_idx is returned, otherwise a negative number is returned. > > + * b) The second part is to obtain the result of requests: > > + * - rte_dmadev_completed() > > + * - return the number of operation requests completed successfully. > > + * - rte_dmadev_completed_fails() > > + * - return the number of operation requests failed to complete. > > Please rename this to "completed_status" to allow the return of information > other than just errors. As I suggested before, I think this should also be > usable as a slower version of "completed" even in the case where there are > no errors, in that it returns status information for each and every job > rather than just returning as soon as it hits a failure. > > > + * + * About the ring_idx which rte_dmadev_copy/copy_sg/fill/fill_sg() > > returned, + * the rules are as follows: + * a) ring_idx for each > > virtual DMA channel are independent. + * b) For a virtual DMA channel, > > the ring_idx is monotonically incremented, + * when it reach > > UINT16_MAX, it wraps back to zero. > > Based on other feedback, I suggest we put in the detail here that: "This > index can be used by applications to track per-job metadata in an > application-defined circular ring, where the ring is a power-of-2 size, and > the indexes are masked appropriately." > > > + * c) The initial ring_idx of a virtual DMA channel is zero, after the device > > + * is stopped or reset, the ring_idx needs to be reset to zero. > > + * Example: > > + * step-1: start one dmadev > > + * step-2: enqueue a copy operation, the ring_idx return is 0 > > + * step-3: enqueue a copy operation again, the ring_idx return is 1 > > + * ... > > + * step-101: stop the dmadev > > + * step-102: start the dmadev > > + * step-103: enqueue a copy operation, the cookie return is 0 > > + * ... > > + * step-x+0: enqueue a fill operation, the ring_idx return is 65535 > > + * step-x+1: enqueue a copy operation, the ring_idx return is 0 > > + * ... > > + * > > + * By default, all the non-dataplane functions of the dmadev API exported by a > > + * PMD are lock-free functions which assume to not be invoked in parallel on > > + * different logical cores to work on the same target object. > > + * > > + * The dataplane functions of the dmadev API exported by a PMD can be MT-safe > > + * only when supported by the driver, generally, the driver will reports two > > + * capabilities: > > + * a) Whether to support MT-safe for the submit/completion API of the same > > + * virtual DMA channel. > > + * E.G. one thread do submit operation, another thread do completion > > + * operation. > > + * If driver support it, then declare RTE_DMA_DEV_CAPA_MT_VCHAN. > > + * If driver don't support it, it's up to the application to guarantee > > + * MT-safe. > > + * b) Whether to support MT-safe for different virtual DMA channels. > > + * E.G. 
one thread do operation on virtual DMA channel 0, another thread > > + * do operation on virtual DMA channel 1. > > + * If driver support it, then declare RTE_DMA_DEV_CAPA_MT_MULTI_VCHAN. > > + * If driver don't support it, it's up to the application to guarantee > > + * MT-safe. > > + * > > + */ > > Just to check - do we have hardware that currently supports these > capabilities? For Intel HW, we will only support one virtual channel per > device without any MT-safety guarantees, so won't be setting either of > these flags. If any of these flags are unused in all planned drivers, we > should drop them from the spec until they prove necessary. Idealy, > everything in the dmadev definition should be testable, and features unused > by anyone obviously will be untested. > > > + > > +#include > > +#include > > +#include > > +#include > > + > > +#ifdef __cplusplus > > +extern "C" { > > +#endif > > + > > +#define RTE_DMADEV_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN > > + > > +extern int rte_dmadev_logtype; > > + > > +#define RTE_DMADEV_LOG(level, ...) \ > > + rte_log(RTE_LOG_ ## level, rte_dmadev_logtype, "" __VA_ARGS__) > > + > > +/* Macros to check for valid port */ > > +#define RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ > > + if (!rte_dmadev_is_valid_dev(dev_id)) { \ > > + RTE_DMADEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ > > + return retval; \ > > + } \ > > +} while (0) > > + > > +#define RTE_DMADEV_VALID_DEV_ID_OR_RET(dev_id) do { \ > > + if (!rte_dmadev_is_valid_dev(dev_id)) { \ > > + RTE_DMADEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ > > + return; \ > > + } \ > > +} while (0) > > + > > Can we avoid using these in the inline functions in this file, and move > them to the _pmd.h which is for internal PMD use only? It would mean we > don't get logging from the key dataplane functions, but I would hope the > return values would provide enough info. > > Alternatively, can we keep the logtype definition and first macro and move > the other two to the _pmd.h file. > > > +/** > > + * @internal > > + * Validate if the DMA device index is a valid attached DMA device. > > + * > > + * @param dev_id > > + * DMA device index. > > + * > > + * @return > > + * - If the device index is valid (true) or not (false). > > + */ > > +__rte_internal > > +bool > > +rte_dmadev_is_valid_dev(uint16_t dev_id); > > + > > +/** > > + * rte_dma_sg - can hold scatter DMA operation request > > + */ > > +struct rte_dma_sg { > > + rte_iova_t src; > > + rte_iova_t dst; > > + uint32_t length; > > +}; > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Get the total number of DMA devices that have been successfully > > + * initialised. > > + * > > + * @return > > + * The total number of usable DMA devices. > > + */ > > +__rte_experimental > > +uint16_t > > +rte_dmadev_count(void); > > + > > +/** > > + * The capabilities of a DMA device > > + */ > > +#define RTE_DMA_DEV_CAPA_MEM_TO_MEM (1ull << 0) > > +/**< DMA device support mem-to-mem transfer. > > Do we need this? Can we assume that any device appearing as a dmadev can > do mem-to-mem copies, and drop the capability for mem-to-mem and the > capability for copying? > > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_MEM_TO_DEV (1ull << 1) > > +/**< DMA device support slave mode & mem-to-dev transfer. 
> > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_DEV_TO_MEM (1ull << 2) > > +/**< DMA device support slave mode & dev-to-mem transfer. > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_DEV_TO_DEV (1ull << 3) > > +/**< DMA device support slave mode & dev-to-dev transfer. > > + * > > Just to confirm, are there devices currently planned for dmadev that We are planning to use this support as our exiting raw driver has this. > supports only a subset of these flags? Thinking particularly of the > dev-2-mem and mem-2-dev ones here - do any of the devices we are > considering not support using device memory? > [Again, just want to ensure we aren't adding too much stuff that we don't > need yet] > > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_OPS_COPY (1ull << 4) > > +/**< DMA device support copy ops. > > + * > > Suggest dropping this and making it min for dmadev. > > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_OPS_FILL (1ull << 5) > > +/**< DMA device support fill ops. > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_OPS_SG (1ull << 6) > > +/**< DMA device support scatter-list ops. > > + * If device support ops_copy and ops_sg, it means supporting copy_sg ops. > > + * If device support ops_fill and ops_sg, it means supporting fill_sg ops. > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_FENCE (1ull << 7) > > +/**< DMA device support fence. > > + * If device support fence, then application could set a fence flags when > > + * enqueue operation by rte_dma_copy/copy_sg/fill/fill_sg. > > + * If a operation has a fence flags, it means the operation must be processed > > + * only after all previous operations are completed. > > + * > > Is this needed? As I understand it, the Marvell driver doesn't require > fences so providing one is a no-op. Therefore, this flag is probably > unnecessary. +1 > > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_SVA (1ull << 8) > > +/**< DMA device support SVA which could use VA as DMA address. > > + * If device support SVA then application could pass any VA address like memory > > + * from rte_malloc(), rte_memzone(), malloc, stack memory. > > + * If device don't support SVA, then application should pass IOVA address which > > + * from rte_malloc(), rte_memzone(). > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_MT_VCHAN (1ull << 9) > > +/**< DMA device support MT-safe of a virtual DMA channel. > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > +#define RTE_DMA_DEV_CAPA_MT_MULTI_VCHAN (1ull << 10) > > +/**< DMA device support MT-safe of different virtual DMA channels. > > + * > > + * @see struct rte_dmadev_info::dev_capa > > + */ > > As with comments above - let's check that these will actually be used > before we add them. 
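Just as a data point on usage: from the application side these capability bits are only ever consumed via rte_dmadev_info_get(), e.g. through a small helper like the one below (sketch against the v2 header, not compile-tested). Any bit that no planned driver actually sets therefore never gets exercised or tested, which argues for starting with the minimal set and extending later.

#include <stdbool.h>
#include <rte_dmadev.h>

/* true if the device advertises all of the requested capability bits */
static bool
dmadev_has_capa(uint16_t dev_id, uint64_t capa)
{
	struct rte_dmadev_info info;

	if (rte_dmadev_info_get(dev_id, &info) < 0)
		return false;
	return (info.dev_capa & capa) == capa;
}

e.g. checking dmadev_has_capa(dev_id, RTE_DMA_DEV_CAPA_DEV_TO_MEM) before setting up a dev-to-mem vchan.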
> > > + > > +/** > > + * A structure used to retrieve the contextual information of > > + * an DMA device > > + */ > > +struct rte_dmadev_info { > > + struct rte_device *device; /**< Generic Device information */ > > + uint64_t dev_capa; /**< Device capabilities (RTE_DMA_DEV_CAPA_) */ > > + /** Maximum number of virtual DMA channels supported */ > > + uint16_t max_vchans; > > + /** Maximum allowed number of virtual DMA channel descriptors */ > > + uint16_t max_desc; > > + /** Minimum allowed number of virtual DMA channel descriptors */ > > + uint16_t min_desc; > > + uint16_t nb_vchans; /**< Number of virtual DMA channel configured */ > > +}; > > Let's add rte_dmadev_conf struct into this to return the configuration > settings. > > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Retrieve the contextual information of a DMA device. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param[out] dev_info > > + * A pointer to a structure of type *rte_dmadev_info* to be filled with the > > + * contextual information of the device. > > + * > > + * @return > > + * - =0: Success, driver updates the contextual information of the DMA device > > + * - <0: Error code returned by the driver info get function. > > + * > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info); > > + > > Should have "const" on second param. > > > +/** > > + * A structure used to configure a DMA device. > > + */ > > +struct rte_dmadev_conf { > > + /** Maximum number of virtual DMA channel to use. > > + * This value cannot be greater than the field 'max_vchans' of struct > > + * rte_dmadev_info which get from rte_dmadev_info_get(). > > + */ > > + uint16_t max_vchans; > > + /** Enable bit for MT-safe of a virtual DMA channel. > > + * This bit can be enabled only when the device supports > > + * RTE_DMA_DEV_CAPA_MT_VCHAN. > > + * @see RTE_DMA_DEV_CAPA_MT_VCHAN > > + */ > > + uint8_t enable_mt_vchan : 1; > > + /** Enable bit for MT-safe of different virtual DMA channels. > > + * This bit can be enabled only when the device supports > > + * RTE_DMA_DEV_CAPA_MT_MULTI_VCHAN. > > + * @see RTE_DMA_DEV_CAPA_MT_MULTI_VCHAN > > + */ > > + uint8_t enable_mt_multi_vchan : 1; > > + uint64_t reserved[2]; /**< Reserved for future fields */ > > +}; > > Drop the reserved fields. ABI versioning is a better way to deal with > adding new fields. +1 > > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Configure a DMA device. > > + * > > + * This function must be invoked first before any other function in the > > + * API. This function can also be re-invoked when a device is in the > > + * stopped state. > > + * > > + * @param dev_id > > + * The identifier of the device to configure. > > + * @param dev_conf > > + * The DMA device configuration structure encapsulated into rte_dmadev_conf > > + * object. > > + * > > + * @return > > + * - =0: Success, device configured. > > + * - <0: Error code returned by the driver configuration function. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Start a DMA device. > > + * > > + * The device start step is the last one and consists of setting the DMA > > + * to start accepting jobs. 
> > + * > > + * @param dev_id > > + * The identifier of the device. > > + * > > + * @return > > + * - =0: Success, device started. > > + * - <0: Error code returned by the driver start function. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_start(uint16_t dev_id); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Stop a DMA device. > > + * > > + * The device can be restarted with a call to rte_dmadev_start() > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * > > + * @return > > + * - =0: Success, device stopped. > > + * - <0: Error code returned by the driver stop function. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_stop(uint16_t dev_id); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Close a DMA device. > > + * > > + * The device cannot be restarted after this call. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * > > + * @return > > + * - =0: Successfully close device > > + * - <0: Failure to close device > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_close(uint16_t dev_id); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Reset a DMA device. > > + * > > + * This is different from cycle of rte_dmadev_start->rte_dmadev_stop in the > > + * sense similar to hard or soft reset. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * > > + * @return > > + * - =0: Successfully reset device. > > + * - <0: Failure to reset device. > > + * - (-ENOTSUP): If the device doesn't support this function. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_reset(uint16_t dev_id); > > + > > +/** > > + * DMA transfer direction defines. > > + */ > > +#define RTE_DMA_MEM_TO_MEM (1ull << 0) > > +/**< DMA transfer direction - from memory to memory. > > + * > > + * @see struct rte_dmadev_vchan_conf::direction > > + */ > > +#define RTE_DMA_MEM_TO_DEV (1ull << 1) > > +/**< DMA transfer direction - slave mode & from memory to device. > > + * In a typical scenario, ARM SoCs are installed on x86 servers as iNICs. In > > + * this case, the ARM SoCs works in slave mode, it could initiate a DMA move > > + * request from ARM memory to x86 host memory. > > For clarity, it would be good to specify in the scenario described which > memory is the "mem" and which is the "dev" (I assume SoC memory is "mem" > and x86 host memory is "dev"??) > > > + * > > + * @see struct rte_dmadev_vchan_conf::direction > > + */ > > +#define RTE_DMA_DEV_TO_MEM (1ull << 2) > > +/**< DMA transfer direction - slave mode & from device to memory. > > + * In a typical scenario, ARM SoCs are installed on x86 servers as iNICs. In > > + * this case, the ARM SoCs works in slave mode, it could initiate a DMA move > > + * request from x86 host memory to ARM memory. > > + * > > + * @see struct rte_dmadev_vchan_conf::direction > > + */ > > +#define RTE_DMA_DEV_TO_DEV (1ull << 3) > > +/**< DMA transfer direction - slave mode & from device to device. > > + * In a typical scenario, ARM SoCs are installed on x86 servers as iNICs. In > > + * this case, the ARM SoCs works in slave mode, it could initiate a DMA move > > + * request from x86 host memory to another x86 host memory. 
> > + * > > + * @see struct rte_dmadev_vchan_conf::direction > > + */ > > +#define RTE_DMA_TRANSFER_DIR_ALL (RTE_DMA_MEM_TO_MEM | \ > > + RTE_DMA_MEM_TO_DEV | \ > > + RTE_DMA_DEV_TO_MEM | \ > > + RTE_DMA_DEV_TO_DEV) > > + > > +/** > > + * enum rte_dma_slave_port_type - slave mode type defines > > + */ > > +enum rte_dma_slave_port_type { > > + /** The slave port is PCIE. */ > > + RTE_DMA_SLAVE_PORT_PCIE = 1, > > +}; > > + > > As previously mentioned, this needs to be updated to use other terms. > For some suggested alternatives see: > https://doc.dpdk.org/guides-21.05/contributing/coding_style.html#naming > > > +/** > > + * A structure used to descript slave port parameters. > > + */ > > +struct rte_dma_slave_port_parameters { > > + enum rte_dma_slave_port_type port_type; > > + union { > > + /** For PCIE port */ > > + struct { > > + /** The physical function number which to use */ > > + uint64_t pf_number : 6; > > + /** Virtual function enable bit */ > > + uint64_t vf_enable : 1; > > + /** The virtual function number which to use */ > > + uint64_t vf_number : 8; > > + uint64_t pasid : 20; > > + /** The attributes filed in TLP packet */ > > + uint64_t tlp_attr : 3; > > + }; > > + }; > > +}; > > + > > +/** > > + * A structure used to configure a virtual DMA channel. > > + */ > > +struct rte_dmadev_vchan_conf { > > + uint8_t direction; /**< Set of supported transfer directions */ > > + /** Number of descriptor for the virtual DMA channel */ > > + uint16_t nb_desc; > > + /** 1) Used to describes the dev parameter in the mem-to-dev/dev-to-mem > > + * transfer scenario. > > + * 2) Used to describes the src dev parameter in the dev-to-dev > > + * transfer scenario. > > + */ > > + struct rte_dma_slave_port_parameters port; > > + /** Used to describes the dst dev parameters in the dev-to-dev > > + * transfer scenario. > > + */ > > + struct rte_dma_slave_port_parameters peer_port; > > + uint64_t reserved[2]; /**< Reserved for future fields */ > > +}; > > Let's drop the reserved fields and use ABI versioning if necesssary in > future. > > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Allocate and set up a virtual DMA channel. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param conf > > + * The virtual DMA channel configuration structure encapsulated into > > + * rte_dmadev_vchan_conf object. > > + * > > + * @return > > + * - >=0: Allocate success, it is the virtual DMA channel id. This value must > > + * be less than the field 'max_vchans' of struct rte_dmadev_conf > > + which configured by rte_dmadev_configure(). > > nit: whitespace error here. > > > + * - <0: Error code returned by the driver virtual channel setup function. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_vchan_setup(uint16_t dev_id, > > + const struct rte_dmadev_vchan_conf *conf); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Release a virtual DMA channel. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel which return by vchan setup. > > + * > > + * @return > > + * - =0: Successfully release the virtual DMA channel. > > + * - <0: Error code returned by the driver virtual channel release function. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_vchan_release(uint16_t dev_id, uint16_t vchan); > > + > > +/** > > + * rte_dmadev_stats - running statistics. 
> > + */ > > +struct rte_dmadev_stats { > > + /** Count of operations which were successfully enqueued */ > > + uint64_t enqueued_count; > > + /** Count of operations which were submitted to hardware */ > > + uint64_t submitted_count; > > + /** Count of operations which failed to complete */ > > + uint64_t completed_fail_count; > > + /** Count of operations which successfully complete */ > > + uint64_t completed_count; > > + uint64_t reserved[4]; /**< Reserved for future fields */ > > +}; > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Retrieve basic statistics of a or all virtual DMA channel(s). > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel, -1 means all channels. > > + * @param[out] stats > > + * The basic statistics structure encapsulated into rte_dmadev_stats > > + * object. > > + * > > + * @return > > + * - =0: Successfully retrieve stats. > > + * - <0: Failure to retrieve stats. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_stats_get(uint16_t dev_id, int vchan, > > vchan as uint16_t rather than int, I think. This would apply to all > dataplane functions. There is no need for a signed vchan value. > > > + struct rte_dmadev_stats *stats); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Reset basic statistics of a or all virtual DMA channel(s). > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel, -1 means all channels. > > + * > > + * @return > > + * - =0: Successfully reset stats. > > + * - <0: Failure to reset stats. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_stats_reset(uint16_t dev_id, int vchan); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Dump DMA device info. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param f > > + * The file to write the output to. > > + * > > + * @return > > + * 0 on success. Non-zero otherwise. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_dump(uint16_t dev_id, FILE *f); > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Trigger the dmadev self test. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * > > + * @return > > + * - 0: Selftest successful. > > + * - -ENOTSUP if the device doesn't support selftest > > + * - other values < 0 on failure. > > + */ > > +__rte_experimental > > +int > > +rte_dmadev_selftest(uint16_t dev_id); > > I don't think this needs to be in the public API, since it should only be > for the autotest app to use. Maybe move the prototype to the _pmd.h (since > we don't have a separate internal header), and then the autotest app can > pick it up from there. > > > + > > +#include "rte_dmadev_core.h" > > + > > +/** > > + * DMA flags to augment operation preparation. > > + * Used as the 'flags' parameter of rte_dmadev_copy/copy_sg/fill/fill_sg. > > + */ > > +#define RTE_DMA_FLAG_FENCE (1ull << 0) > > +/**< DMA fence flag > > + * It means the operation with this flag must be processed only after all > > + * previous operations are completed. 
> > + * > > + * @see rte_dmadev_copy() > > + * @see rte_dmadev_copy_sg() > > + * @see rte_dmadev_fill() > > + * @see rte_dmadev_fill_sg() > > + */ > > As a general comment, I think all these multi-line comments should go > before the item they describe. Comments after should only be used in the > case where the comment fits on the rest of the line after a value. > > We also should define the SUBMIT flag as suggested by Jerin, to allow apps > to automatically submit jobs after enqueue. > > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Enqueue a copy operation onto the virtual DMA channel. > > + * > > + * This queues up a copy operation to be performed by hardware, but does not > > + * trigger hardware to begin that operation. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * @param src > > + * The address of the source buffer. > > + * @param dst > > + * The address of the destination buffer. > > + * @param length > > + * The length of the data to be copied. > > + * @param flags > > + * An flags for this operation. > > + * > > + * @return > > + * - 0..UINT16_MAX: index of enqueued copy job. > > + * - <0: Error code returned by the driver copy function. > > + */ > > +__rte_experimental > > +static inline int > > +rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, > > + uint32_t length, uint64_t flags) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > +#endif > > + return (*dev->copy)(dev, vchan, src, dst, length, flags); > > +} > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Enqueue a scatter list copy operation onto the virtual DMA channel. > > + * > > + * This queues up a scatter list copy operation to be performed by hardware, > > + * but does not trigger hardware to begin that operation. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * @param sg > > + * The pointer of scatterlist. > > + * @param sg_len > > + * The number of scatterlist elements. > > + * @param flags > > + * An flags for this operation. > > + * > > + * @return > > + * - 0..UINT16_MAX: index of enqueued copy job. > > + * - <0: Error code returned by the driver copy function. > > + */ > > +__rte_experimental > > +static inline int > > +rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, const struct rte_dma_sg *sg, > > + uint32_t sg_len, uint64_t flags) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(sg, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > +#endif > > + return (*dev->copy_sg)(dev, vchan, sg, sg_len, flags); > > +} > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Enqueue a fill operation onto the virtual DMA channel. 
> > + * > > + * This queues up a fill operation to be performed by hardware, but does not > > + * trigger hardware to begin that operation. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * @param pattern > > + * The pattern to populate the destination buffer with. > > + * @param dst > > + * The address of the destination buffer. > > + * @param length > > + * The length of the destination buffer. > > + * @param flags > > + * An flags for this operation. > > + * > > + * @return > > + * - 0..UINT16_MAX: index of enqueued copy job. > > + * - <0: Error code returned by the driver copy function. > > + */ > > +__rte_experimental > > +static inline int > > +rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern, > > + rte_iova_t dst, uint32_t length, uint64_t flags) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > +#endif > > + return (*dev->fill)(dev, vchan, pattern, dst, length, flags); > > +} > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Enqueue a scatter list fill operation onto the virtual DMA channel. > > + * > > + * This queues up a scatter list fill operation to be performed by hardware, > > + * but does not trigger hardware to begin that operation. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * @param pattern > > + * The pattern to populate the destination buffer with. > > + * @param sg > > + * The pointer of scatterlist. > > + * @param sg_len > > + * The number of scatterlist elements. > > + * @param flags > > + * An flags for this operation. > > + * > > + * @return > > + * - 0..UINT16_MAX: index of enqueued copy job. > > + * - <0: Error code returned by the driver copy function. > > + */ > > +__rte_experimental > > +static inline int > > +rte_dmadev_fill_sg(uint16_t dev_id, uint16_t vchan, uint64_t pattern, > > + const struct rte_dma_sg *sg, uint32_t sg_len, > > + uint64_t flags) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(sg, -ENOTSUP); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > +#endif > > + return (*dev->fill_sg)(dev, vchan, pattern, sg, sg_len, flags); > > +} > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Trigger hardware to begin performing enqueued operations. > > + * > > + * This API is used to write the "doorbell" to the hardware to trigger it > > + * to begin the operations previously enqueued by rte_dmadev_copy/fill() > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * > > + * @return > > + * - =0: Successfully trigger hardware. > > + * - <0: Failure to trigger hardware. 
> > + */ > > +__rte_experimental > > +static inline int > > +rte_dmadev_submit(uint16_t dev_id, uint16_t vchan) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > +#endif > > + return (*dev->submit)(dev, vchan); > > +} > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Returns the number of operations that have been successfully completed. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * @param nb_cpls > > + * The maximum number of completed operations that can be processed. > > + * @param[out] last_idx > > + * The last completed operation's index. > > + * If not required, NULL can be passed in. > > + * @param[out] has_error > > + * Indicates if there are transfer error. > > + * If not required, NULL can be passed in. > > + * > > + * @return > > + * The number of operations that successfully completed. > > + */ > > +__rte_experimental > > +static inline uint16_t > > +rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, > > + uint16_t *last_idx, bool *has_error) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > + uint16_t idx; > > + bool err; > > + > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > + if (nb_cpls == 0) { > > + RTE_DMADEV_LOG(ERR, "Invalid nb_cpls\n"); > > + return -EINVAL; > > + } > > +#endif > > + > > + /* Ensure the pointer values are non-null to simplify drivers. > > + * In most cases these should be compile time evaluated, since this is > > + * an inline function. > > + * - If NULL is explicitly passed as parameter, then compiler knows the > > + * value is NULL > > + * - If address of local variable is passed as parameter, then compiler > > + * can know it's non-NULL. > > + */ > > + if (last_idx == NULL) > > + last_idx = &idx; > > + if (has_error == NULL) > > + has_error = &err; > > + > > + *has_error = false; > > + return (*dev->completed)(dev, vchan, nb_cpls, last_idx, has_error); > > +} > > + > > +/** > > + * DMA transfer status code defines > > + */ > > +enum rte_dma_status_code { > > + /** The operation completed successfully */ > > + RTE_DMA_STATUS_SUCCESSFUL = 0, > > + /** The operation failed to complete due active drop > > + * This is mainly used when processing dev_stop, allow outstanding > > + * requests to be completed as much as possible. > > + */ > > + RTE_DMA_STATUS_ACTIVE_DROP, > > + /** The operation failed to complete due invalid source address */ > > + RTE_DMA_STATUS_INVALID_SRC_ADDR, > > + /** The operation failed to complete due invalid destination address */ > > + RTE_DMA_STATUS_INVALID_DST_ADDR, > > + /** The operation failed to complete due invalid length */ > > + RTE_DMA_STATUS_INVALID_LENGTH, > > + /** The operation failed to complete due invalid opcode > > + * The DMA descriptor could have multiple format, which are > > + * distinguished by the opcode field. 
> > + */ > > + RTE_DMA_STATUS_INVALID_OPCODE, > > + /** The operation failed to complete due bus err */ > > + RTE_DMA_STATUS_BUS_ERROR, > > + /** The operation failed to complete due data poison */ > > + RTE_DMA_STATUS_DATA_POISION, > > + /** The operation failed to complete due descriptor read error */ > > + RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR, > > + /** The operation failed to complete due device link error > > + * Used to indicates that the link error in the mem-to-dev/dev-to-mem/ > > + * dev-to-dev transfer scenario. > > + */ > > + RTE_DMA_STATUS_DEV_LINK_ERROR, > > + /** Driver specific status code offset > > + * Start status code for the driver to define its own error code. > > + */ > > + RTE_DMA_STATUS_DRV_SPECIFIC_OFFSET = 0x10000, > > +}; > > + > > +/** > > + * @warning > > + * @b EXPERIMENTAL: this API may change without prior notice. > > + * > > + * Returns the number of operations that failed to complete. > > + * NOTE: This API was used when rte_dmadev_completed has_error was set. > > + * > > + * @param dev_id > > + * The identifier of the device. > > + * @param vchan > > + * The identifier of virtual DMA channel. > > + * @param nb_status > > + * Indicates the size of status array. > > + * @param[out] status > > + * The error code of operations that failed to complete. > > + * Some standard error code are described in 'enum rte_dma_status_code' > > + * @see rte_dma_status_code > > + * @param[out] last_idx > > + * The last failed completed operation's index. > > + * > > + * @return > > + * The number of operations that failed to complete. > > + */ > > +__rte_experimental > > +static inline uint16_t > > +rte_dmadev_completed_fails(uint16_t dev_id, uint16_t vchan, > > + const uint16_t nb_status, uint32_t *status, > > + uint16_t *last_idx) > > +{ > > + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; > > +#ifdef RTE_DMADEV_DEBUG > > + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(status, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(last_idx, -EINVAL); > > + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_fails, -ENOTSUP); > > + if (vchan >= dev->data->dev_conf.max_vchans) { > > + RTE_DMADEV_LOG(ERR, "Invalid vchan %d\n", vchan); > > + return -EINVAL; > > + } > > + if (nb_status == 0) { > > + RTE_DMADEV_LOG(ERR, "Invalid nb_status\n"); > > + return -EINVAL; > > + } > > +#endif > > + return (*dev->completed_fails)(dev, vchan, nb_status, status, last_idx); > > +} > > + > > +#ifdef __cplusplus > > +} > > +#endif > > + > > +#endif /* _RTE_DMADEV_H_ */ > > diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h > > new file mode 100644 > > index 0000000..410faf0 > > --- /dev/null > > +++ b/lib/dmadev/rte_dmadev_core.h > > @@ -0,0 +1,159 @@ > > +/* SPDX-License-Identifier: BSD-3-Clause > > + * Copyright(c) 2021 HiSilicon Limited. > > + * Copyright(c) 2021 Intel Corporation. > > + */ > > + > > +#ifndef _RTE_DMADEV_CORE_H_ > > +#define _RTE_DMADEV_CORE_H_ > > + > > +/** > > + * @file > > + * > > + * RTE DMA Device internal header. > > + * > > + * This header contains internal data types, that are used by the DMA devices > > + * in order to expose their ops to the class. > > + * > > + * Applications should not use these API directly. > > + * > > + */ > > + > > +struct rte_dmadev; > > + > > +/** @internal Used to get device information of a device. */ > > +typedef int (*dmadev_info_get_t)(struct rte_dmadev *dev, > > + struct rte_dmadev_info *dev_info); > > First parameter can be "const" > > > +/** @internal Used to configure a device. 
*/ > > +typedef int (*dmadev_configure_t)(struct rte_dmadev *dev, > > + const struct rte_dmadev_conf *dev_conf); > > + > > +/** @internal Used to start a configured device. */ > > +typedef int (*dmadev_start_t)(struct rte_dmadev *dev); > > + > > +/** @internal Used to stop a configured device. */ > > +typedef int (*dmadev_stop_t)(struct rte_dmadev *dev); > > + > > +/** @internal Used to close a configured device. */ > > +typedef int (*dmadev_close_t)(struct rte_dmadev *dev); > > + > > +/** @internal Used to reset a configured device. */ > > +typedef int (*dmadev_reset_t)(struct rte_dmadev *dev); > > + > > +/** @internal Used to allocate and set up a virtual DMA channel. */ > > +typedef int (*dmadev_vchan_setup_t)(struct rte_dmadev *dev, > > + const struct rte_dmadev_vchan_conf *conf); > > + > > +/** @internal Used to release a virtual DMA channel. */ > > +typedef int (*dmadev_vchan_release_t)(struct rte_dmadev *dev, uint16_t vchan); > > + > > +/** @internal Used to retrieve basic statistics. */ > > +typedef int (*dmadev_stats_get_t)(struct rte_dmadev *dev, int vchan, > > + struct rte_dmadev_stats *stats); > > First parameter can be "const" > > > + > > +/** @internal Used to reset basic statistics. */ > > +typedef int (*dmadev_stats_reset_t)(struct rte_dmadev *dev, int vchan); > > + > > +/** @internal Used to dump internal information. */ > > +typedef int (*dmadev_dump_t)(struct rte_dmadev *dev, FILE *f); > > + > > First param "const" > > > +/** @internal Used to start dmadev selftest. */ > > +typedef int (*dmadev_selftest_t)(uint16_t dev_id); > > + > > This looks an outlier taking a dev_id. It should take a rawdev parameter. > Most drivers should not need to implement this anyway, as the main unit > tests should be in "test_dmadev.c" in the autotest app. > > > +/** @internal Used to enqueue a copy operation. */ > > +typedef int (*dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan, > > + rte_iova_t src, rte_iova_t dst, > > + uint32_t length, uint64_t flags); > > + > > +/** @internal Used to enqueue a scatter list copy operation. */ > > +typedef int (*dmadev_copy_sg_t)(struct rte_dmadev *dev, uint16_t vchan, > > + const struct rte_dma_sg *sg, > > + uint32_t sg_len, uint64_t flags); > > + > > +/** @internal Used to enqueue a fill operation. */ > > +typedef int (*dmadev_fill_t)(struct rte_dmadev *dev, uint16_t vchan, > > + uint64_t pattern, rte_iova_t dst, > > + uint32_t length, uint64_t flags); > > + > > +/** @internal Used to enqueue a scatter list fill operation. */ > > +typedef int (*dmadev_fill_sg_t)(struct rte_dmadev *dev, uint16_t vchan, > > + uint64_t pattern, const struct rte_dma_sg *sg, > > + uint32_t sg_len, uint64_t flags); > > + > > +/** @internal Used to trigger hardware to begin working. */ > > +typedef int (*dmadev_submit_t)(struct rte_dmadev *dev, uint16_t vchan); > > + > > +/** @internal Used to return number of successful completed operations. */ > > +typedef uint16_t (*dmadev_completed_t)(struct rte_dmadev *dev, uint16_t vchan, > > + const uint16_t nb_cpls, > > + uint16_t *last_idx, bool *has_error); > > + > > +/** @internal Used to return number of failed completed operations. 
*/ > > +typedef uint16_t (*dmadev_completed_fails_t)(struct rte_dmadev *dev, > > + uint16_t vchan, const uint16_t nb_status, > > + uint32_t *status, uint16_t *last_idx); > > + > > +/** > > + * DMA device operations function pointer table > > + */ > > +struct rte_dmadev_ops { > > + dmadev_info_get_t dev_info_get; > > + dmadev_configure_t dev_configure; > > + dmadev_start_t dev_start; > > + dmadev_stop_t dev_stop; > > + dmadev_close_t dev_close; > > + dmadev_reset_t dev_reset; > > + dmadev_vchan_setup_t vchan_setup; > > + dmadev_vchan_release_t vchan_release; > > + dmadev_stats_get_t stats_get; > > + dmadev_stats_reset_t stats_reset; > > + dmadev_dump_t dev_dump; > > + dmadev_selftest_t dev_selftest; > > +}; > > + > > +/** > > + * @internal > > + * The data part, with no function pointers, associated with each DMA device. > > + * > > + * This structure is safe to place in shared memory to be common among different > > + * processes in a multi-process configuration. > > + */ > > +struct rte_dmadev_data { > > + uint16_t dev_id; /**< Device [external] identifier. */ > > + char dev_name[RTE_DMADEV_NAME_MAX_LEN]; /**< Unique identifier name */ > > + void *dev_private; /**< PMD-specific private data. */ > > + struct rte_dmadev_conf dev_conf; /**< DMA device configuration. */ > > + uint8_t dev_started : 1; /**< Device state: STARTED(1)/STOPPED(0). */ > > + uint64_t reserved[4]; /**< Reserved for future fields */ > > +} __rte_cache_aligned; > > + > > While I generally don't like having reserved space, this is one place where > it makes sense, so +1 for it here. > > > +/** > > + * @internal > > + * The generic data structure associated with each DMA device. > > + * > > + * The dataplane APIs are located at the beginning of the structure, along > > + * with the pointer to where all the data elements for the particular device > > + * are stored in shared memory. This split scheme allows the function pointer > > + * and driver data to be per-process, while the actual configuration data for > > + * the device is shared. > > + */ > > +struct rte_dmadev { > > + dmadev_copy_t copy; > > + dmadev_copy_sg_t copy_sg; > > + dmadev_fill_t fill; > > + dmadev_fill_sg_t fill_sg; > > + dmadev_submit_t submit; > > + dmadev_completed_t completed; > > + dmadev_completed_fails_t completed_fails; > > + const struct rte_dmadev_ops *dev_ops; /**< Functions exported by PMD. */ > > + /** Flag indicating the device is attached: ATTACHED(1)/DETACHED(0). */ > > + uint8_t attached : 1; > > Since it's in the midst of a series of pointers, this 1-bit flag is > actually using 8-bytes of space. Is it needed. Can we use dev_ops == NULL > or data == NULL instead to indicate this is a valid entry? > > > + /** Device info which supplied during device initialization. */ > > + struct rte_device *device; > > + struct rte_dmadev_data *data; /**< Pointer to device data. */ > > If we are to try and minimise cacheline access, we should put this data > pointer - or even better a copy of data->private pointer - at the top of > the structure on the same cacheline as datapath operations. For dataplane, > I can't see any elements of data, except the private pointer being > accessed, so we would probably get most benefit for having a copy put there > on init of the dmadev struct. 
> > > + uint64_t reserved[4]; /**< Reserved for future fields */ > > +} __rte_cache_aligned; > > + > > +extern struct rte_dmadev rte_dmadevices[]; > > + > > +#endif /* _RTE_DMADEV_CORE_H_ */ > > diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h > > new file mode 100644 > > index 0000000..45141f9 > > --- /dev/null > > +++ b/lib/dmadev/rte_dmadev_pmd.h > > @@ -0,0 +1,72 @@ > > +/* SPDX-License-Identifier: BSD-3-Clause > > + * Copyright(c) 2021 HiSilicon Limited. > > + */ > > + > > +#ifndef _RTE_DMADEV_PMD_H_ > > +#define _RTE_DMADEV_PMD_H_ > > + > > +/** > > + * @file > > + * > > + * RTE DMA Device PMD APIs > > + * > > + * Driver facing APIs for a DMA device. These are not to be called directly by > > + * any application. > > + */ > > + > > +#include "rte_dmadev.h" > > + > > +#ifdef __cplusplus > > +extern "C" { > > +#endif > > + > > +/** > > + * @internal > > + * Allocates a new dmadev slot for an DMA device and returns the pointer > > + * to that slot for the driver to use. > > + * > > + * @param name > > + * DMA device name. > > + * > > + * @return > > + * A pointer to the DMA device slot case of success, > > + * NULL otherwise. > > + */ > > +__rte_internal > > +struct rte_dmadev * > > +rte_dmadev_pmd_allocate(const char *name); > > + > > +/** > > + * @internal > > + * Release the specified dmadev. > > + * > > + * @param dev > > + * Device to be released. > > + * > > + * @return > > + * - 0 on success, negative on error > > + */ > > +__rte_internal > > +int > > +rte_dmadev_pmd_release(struct rte_dmadev *dev); > > + > > +/** > > + * @internal > > + * Return the DMA device based on the device name. > > + * > > + * @param name > > + * DMA device name. > > + * > > + * @return > > + * A pointer to the DMA device slot case of success, > > + * NULL otherwise. > > + */ > > +__rte_internal > > +struct rte_dmadev * > > +rte_dmadev_get_device_by_name(const char *name); > > + > > +#ifdef __cplusplus > > +} > > +#endif > > + > > +#endif /* _RTE_DMADEV_PMD_H_ */ > > diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map > > new file mode 100644 > > index 0000000..0f099e7 > > --- /dev/null > > +++ b/lib/dmadev/version.map > > @@ -0,0 +1,40 @@ > > +EXPERIMENTAL { > > + global: > > + > > + rte_dmadev_count; > > + rte_dmadev_info_get; > > + rte_dmadev_configure; > > + rte_dmadev_start; > > + rte_dmadev_stop; > > + rte_dmadev_close; > > + rte_dmadev_reset; > > + rte_dmadev_vchan_setup; > > + rte_dmadev_vchan_release; > > + rte_dmadev_stats_get; > > + rte_dmadev_stats_reset; > > + rte_dmadev_dump; > > + rte_dmadev_selftest; > > + rte_dmadev_copy; > > + rte_dmadev_copy_sg; > > + rte_dmadev_fill; > > + rte_dmadev_fill_sg; > > + rte_dmadev_submit; > > + rte_dmadev_completed; > > + rte_dmadev_completed_fails; > > + > > + local: *; > > +}; > > The elements in the version.map file blocks should be sorted alphabetically. > > > + > > +INTERNAL { > > + global: > > + > > + rte_dmadevices; > > + rte_dmadev_pmd_allocate; > > + rte_dmadev_pmd_release; > > + rte_dmadev_get_device_by_name; > > + > > + local: > > + > > + rte_dmadev_is_valid_dev; > > +}; > > + > > diff --git a/lib/meson.build b/lib/meson.build > > index 1673ca4..68d239f 100644 > > --- a/lib/meson.build > > +++ b/lib/meson.build > > @@ -60,6 +60,7 @@ libraries = [ > > 'bpf', > > 'graph', > > 'node', > > + 'dmadev', > > ] > > > > if is_windows > > -- > > 2.8.1 > >
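One more follow-up on the ring_idx / per-job metadata tracking suggested earlier in the thread: with the v2 prototypes, the application side would look roughly like the below (ring size, helper names and the per-job context are only illustrative; error handling via has_error/completed_fails is omitted; not compile-tested):

#include <stdbool.h>
#include <stdint.h>
#include <rte_dmadev.h>

#define JOB_RING_SZ   1024                /* power of two, divides 2^16 */
#define JOB_RING_MASK (JOB_RING_SZ - 1)

static void *job_ctx[JOB_RING_SZ];        /* application-defined per-job context */

static int
enqueue_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
	     uint32_t len, void *ctx)
{
	int idx = rte_dmadev_copy(dev_id, vchan, src, dst, len, 0);

	if (idx < 0)
		return idx;
	/* ring_idx masked into the application's power-of-two ring */
	job_ctx[idx & JOB_RING_MASK] = ctx;
	/* doorbell per job only for simplicity; real code would batch */
	return rte_dmadev_submit(dev_id, vchan);
}

static uint16_t
harvest_completions(uint16_t dev_id, uint16_t vchan, void (*done)(void *ctx))
{
	uint16_t last_idx, n, i;
	bool has_error;

	n = rte_dmadev_completed(dev_id, vchan, 64, &last_idx, &has_error);
	/* the n completed jobs carry indexes last_idx - n + 1 .. last_idx */
	for (i = 0; i < n; i++)
		done(job_ctx[(uint16_t)(last_idx - n + 1 + i) & JOB_RING_MASK]);
	return n;
}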