From: Jerin Jacob
Date: Thu, 21 Sep 2023 08:11:01 +0530
Subject: Re: [PATCH v1 1/7] eventdev: introduce DMA event adapter library
To: Amit Prakash Shukla
Cc: Jerin Jacob, dev@dpdk.org, fengchengwen@huawei.com, kevin.laatz@intel.com, bruce.richardson@intel.com, conor.walsh@intel.com, vattunuru@marvell.com, g.singh@nxp.com, sachin.saxena@oss.nxp.com, hemant.agrawal@nxp.com, cheng1.jiang@intel.com, ndabilpuram@marvell.com, anoobj@marvell.com, mb@smartsharesystems.com
In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com>
List-Id: DPDK patches and discussions

On Tue, Sep 19, 2023 at 7:12 PM Amit Prakash Shukla wrote:
>
> Introduce event DMA adapter APIs. The change provides information
> on adapter modes and usage. Application can use this event adapter
> interface to transfer packets between DMA device and event device.
>
> Signed-off-by: Amit Prakash Shukla

Please have a cover letter and document the changes since the RFC for easy review.

The driver changes can be accepted after rc1.
So please split this into two series for better review.

First series - targeting merge in rc1:
- API header file with programming guide
- SW implementation
- Test cases

Second series - targeting merge in rc2:
- cnxk driver changes

> ---
>  13 files changed, 3336 insertions(+), 5 deletions(-)
>  create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst
>  create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
>  create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
>  create mode 100644 lib/eventdev/rte_event_dma_adapter.h

Missing updates to MAINTAINERS and doc/guides/rel_notes/release_23_11.rst.

> diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
> index 00360f60c6..fda8baf487 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -44,6 +44,14 @@ internal_port_op_fwd =
>  internal_port_qp_ev_bind =
>  session_private_data =
>
> +;
> +; Features of a default DMA adapter.
> +;
> +[DMA adapter Features]
> +internal_port_op_new =
> +internal_port_op_fwd =
> +internal_port_qp_ev_bind =

dmadev has no qp (queue pair); change it to vchan.

> diff --git a/doc/guides/prog_guide/event_dma_adapter.rst b/doc/guides/prog_guide/event_dma_adapter.rst

<snip>

> +Adding vchan queue to the adapter instance

Keep only "vchan" (remove "queue") to align with dmadev.

> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +dmadev device id and vchan queue are configured using dmadev APIs. For more information
> +see :doc:`here `.
> +
> +.. code-block:: c
> +
> +        struct rte_dma_vchan_conf vchan_conf;
> +        struct rte_dma_conf dev_conf;
> +        uint8_t dev_id = 0;
> +        uint16_t vchan = 0;
> +
> +        rte_dma_configure(dev_id, &dev_conf);
> +        rte_dma_vchan_setup(dev_id, vhcan, &vchan_conf);
> +
> +These dmadev id and vchan are added to the instance using the
> +``rte_event_dma_adapter_vchan_queue_add()`` API.
> +The same is removed using

Keep only "vchan" (remove "queue") to align with dmadev.

> +``rte_event_dma_adapter_vchan_queue_del()`` API. If hardware supports

Keep only "vchan" (remove "queue") to align with dmadev.

> +``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND`` capability, event information must be passed to
> +the add API.

QP -> VCHAN

> +
> +.. code-block:: c
> +
> +        uint32_t cap;
> +        int ret;
> +
> +        ret = rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &cap);
> +        if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) {
> +                struct rte_event event;
> +
> +                rte_event_dma_adapter_vchan_queue_add(id, dma_dev_id, vchan, &conf);

Keep only "vchan" (remove "queue") to align with dmadev.

> +        } else
> +                rte_event_dma_adapter_vchan_queue_add(id, dma_dev_id, vchan, NULL);

Keep only "vchan" (remove "queue") to align with dmadev.

> +
> +
> +
> +Set event request / response information
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +In the RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode, the application specifies the dmadev ID and
> +vchan ID (request information) in addition to the event information (response information)
> +needed to enqueue an event after the DMA operation has completed. The request and response
> +information are specified in the ``struct rte_event_dma_metadata``.

Symbol rte_event_dma_metadata is not found in the header file. Remove this section if it is not used.

> +/**
> + * This API may change without prior notice
> + *
> + * Add DMA queue pair to event device. This callback is invoked if
> + * the caps returned from rte_event_dma_adapter_caps_get(, dmadev_id)
> + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set.
> + *
> + * @param dev
> + *   Event device pointer
> + *
> + * @param dmadev
> + *   DMADEV pointer
> + *
> + * @param queue_pair_id
> + *   DMADEV queue pair identifier.
> + *
> + * @param event
> + *   Event information required for binding dmadev queue pair to event queue.
> + *   This structure will have a valid value for only those HW PMDs supporting
> + *   @see RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND capability.

QP -> VCHAN

> + *
> + * @return
> + *   - 0: Success, dmadev queue pair added successfully.
> + *   - <0: Error code returned by the driver function.
> + *
> + */
> +typedef int (*eventdev_dma_adapter_queue_pair_add_t)(const struct rte_eventdev *dev,

queue_pair -> vchan

> +                                                     const struct rte_dma_dev *dmadev,
> +                                                     int32_t queue_pair_id,

queue_pair -> vchan

> +++ b/lib/eventdev/rte_event_dma_adapter.h
> @@ -0,0 +1,641 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2023 Marvell.
> + */
> +
> +#ifndef RTE_EVENT_DMA_ADAPTER
> +#define RTE_EVENT_DMA_ADAPTER
> +
> +/**
> + * @file rte_event_dma_adapter.h
> + *
> + * @warning
> + * @b EXPERIMENTAL:
> + * All functions in this file may be changed or removed without prior notice.
> + *
> + * DMA Event Adapter API.
> + *
> + * Eventdev library provides adapters to bridge between various components for providing new
> + * event source. The event DMA adapter is one of those adapters which is intended to bridge
> + * between event devices and DMA devices.
> + *
> + * The DMA adapter adds support to enqueue / dequeue DMA operations to / from event device. The
> + * packet flow between DMA device and the event device can be accomplished using both SW and HW
> + * based transfer mechanisms. The adapter uses an EAL service core function for SW based packet
> + * transfer and uses the eventdev PMD functions to configure HW based packet transfer between the
> + * DMA device and the event device.
> + *
> + * The application can choose to submit a DMA operation directly to an DMA device or send it to the
> + * DMA adapter via eventdev based on RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. The
> + * first mode is known as the event new (RTE_EVENT_DMA_ADAPTER_OP_NEW) mode and the second as the
> + * event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode. The choice of mode can be specified while
> + * creating the adapter. In the former mode, it is an application responsibility to enable ingress
> + * packet ordering. In the latter mode, it is the adapter responsibility to enable the ingress
> + * packet ordering.
> + *
> + *
> + * Working model of RTE_EVENT_DMA_ADAPTER_OP_NEW mode:
> + *
> + *         +--------------+         +--------------+
> + *         |              |         |  DMA stage   |
> + *         | Application  |---[2]-->| + enqueue to |
> + *         |              |         |    dmadev    |
> + *         +--------------+         +--------------+
> + *             ^   ^                       |
> + *             |   |                      [3]
> + *            [6] [1]                      |
> + *             |   |                       |
> + *         +--------------+                |
> + *         |              |                |
> + *         | Event device |                |
> + *         |              |                |
> + *         +--------------+                |
> + *                ^                        |
> + *                |                        |
> + *               [5]                       |
> + *                |                        v
> + *         +--------------+         +--------------+
> + *         |              |         |              |
> + *         | DMA adapter  |<--[4]---|    dmadev    |
> + *         |              |         |              |
> + *         +--------------+         +--------------+
> + *
> + *
> + *         [1] Application dequeues events from the previous stage.
> + *         [2] Application prepares the DMA operations.
> + *         [3] DMA operations are submitted to dmadev by application.
> + *         [4] DMA adapter dequeues DMA completions from dmadev.
> + *         [5] DMA adapter enqueues events to the eventdev.
> + *         [6] Application dequeues from eventdev for further processing.
> + *
> + * In the RTE_EVENT_DMA_ADAPTER_OP_NEW mode, application submits DMA operations directly to DMA
> + * device. The DMA adapter then dequeues DMA completions from DMA device and enqueue events to the
> + * event device. This mode does not ensure ingress ordering, if the application directly enqueues
> + * to dmadev without going through DMA / atomic stage i.e. removing item [1] and [2].
> + *
> + * Events dequeued from the adapter will be treated as new events.
> + * In this mode, application needs
> + * to specify event information (response information) which is needed to enqueue an event after the
> + * DMA operation is completed.
> + *
> + *
> + * Working model of RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode:
> + *
> + *         +--------------+         +--------------+
> + * --[1]-->|              |---[2]-->|  Application |
> + *         | Event device |         |      in      |
> + * <--[8]--|              |<--[3]---| Ordered stage|
> + *         +--------------+         +--------------+
> + *             ^      |
> + *             |     [4]
> + *            [7]     |
> + *             |      v
> + *       +----------------+       +--------------+
> + *       |                |--[5]->|              |
> + *       |  DMA adapter   |       |    dmadev    |
> + *       |                |<-[6]--|              |
> + *       +----------------+       +--------------+
> + *
> + *
> + *         [1] Events from the previous stage.
> + *         [2] Application in ordered stage dequeues events from eventdev.
> + *         [3] Application enqueues DMA operations as events to eventdev.
> + *         [4] DMA adapter dequeues event from eventdev.
> + *         [5] DMA adapter submits DMA operations to dmadev (Atomic stage).
> + *         [6] DMA adapter dequeues DMA completions from dmadev
> + *         [7] DMA adapter enqueues events to the eventdev
> + *         [8] Events to the next stage
> + *
> + * In the event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode, if the HW supports the capability
> + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application can directly submit the DMA
> + * operations to the dmadev. If not, application retrieves the event port of the DMA adapter
> + * through the API, rte_event_DMA_adapter_event_port_get(). Then, links its event queue to this
> + * port and starts enqueuing DMA operations as events to the eventdev. The adapter then dequeues
> + * the events and submits the DMA operations to the dmadev. After the DMA completions, the adapter
> + * enqueues events to the event device.
> + *
> + * Application can use this mode, when ingress packet ordering is needed. Events dequeued from the
> + * adapter will be treated as forwarded events.
> + * In this mode, the application needs to specify the
> + * dmadev ID and queue pair ID (request information) needed to enqueue an DMA operation in addition

Grep for "queue pair" and change it to vchan everywhere.

> + * to the event information (response information) needed to enqueue an event after the DMA
> + * operation has completed.
> + *
> + * The event DMA adapter provides common APIs to configure the packet flow from the DMA device to
> + * event devices for both SW and HW based transfers. The DMA event adapter's functions are:
> + *
> + *  - rte_event_dma_adapter_create_ext()
> + *  - rte_event_dma_adapter_create()
> + *  - rte_event_dma_adapter_free()
> + *  - rte_event_dma_adapter_vchan_queue_add()
> + *  - rte_event_dma_adapter_vchan_queue_del()

Remove "queue".

> + *  - rte_event_dma_adapter_start()
> + *  - rte_event_dma_adapter_stop()
> + *  - rte_event_dma_adapter_stats_get()
> + *  - rte_event_dma_adapter_stats_reset()
> + *
> + * The application creates an instance using rte_event_dma_adapter_create() or
> + * rte_event_dma_adapter_create_ext().
> + *
> + * dmadev queue pair addition / deletion is done using the rte_event_dma_adapter_vchan_queue_add() /
> + * rte_event_dma_adapter_vchan_queue_del() APIs. If HW supports the capability
> + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND, event information must be passed to the add
> + * API.
> + *
> + */
> +
> +#include
> +
> +#include "rte_eventdev.h"

Use <> and sort the includes in alphabetical order.

> +#include
> +#include
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * A structure used to hold event based DMA operation entry.
> + */
> +struct rte_event_dma_adapter_op {
> +        struct rte_dma_sge *src_seg;
> +        /**< Source segments. */
> +        struct rte_dma_sge *dst_seg;
> +        /**< Destination segments. */
> +        uint16_t nb_src;
> +        /**< Number of source segments. */
> +        uint16_t nb_dst;
> +        /**< Number of destination segments. */
> +        uint64_t flags;
> +        /**< Flags related to the operation.
> +         * @see RTE_DMA_OP_FLAG_*
> +         */
> +        struct rte_mempool *op_mp;
> +        /**< Mempool from which op is allocated. */
> +};
> +
> +/**
> + * DMA event adapter mode
> + */
> +enum rte_event_dma_adapter_mode {
> +        RTE_EVENT_DMA_ADAPTER_OP_NEW,
> +        /**< Start the DMA adapter in event new mode.
> +         * @see RTE_EVENT_OP_NEW.
> +         *
> +         * Application submits DMA operations to the dmadev. Adapter only dequeues the DMA
> +         * completions from dmadev and enqueue events to the eventdev.
> +         */
> +
> +        RTE_EVENT_DMA_ADAPTER_OP_FORWARD,
> +        /**< Start the DMA adapter in event forward mode.
> +         * @see RTE_EVENT_OP_FORWARD.
> +         *
> +         * Application submits DMA requests as events to the DMA adapter or DMA device based on
> +         * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. DMA completions are enqueued
> +         * back to the eventdev by DMA adapter.
> +         */
> +};
> +
> +/**
> + * DMA event request structure will be filled by application to provide event request information to
> + * the adapter.
> + */
> +struct rte_event_dma_request {

This got duplicated by rte_event_dma_adapter_op, right? If so, please remove it.

> +        uint8_t resv[8];
> +        /**< Overlaps with first 8 bytes of struct rte_event that encode the response event
> +         * information. Application is expected to fill in struct rte_event response_info.
> +         */
> +
> +        int16_t dmadev_id;
> +        /**< DMA device ID to be used */
> +
> +        uint16_t queue_pair_id;
> +        /**< DMA queue pair ID to be used */
> +
> +        uint32_t rsvd;
> +        /**< Reserved bits */
> +};
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.

The header file already has this banner. No need to duplicate it for EVERY function.

> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Add a vchan queue to an event DMA adapter.

Remove "queue".

> + *
> + * @param id
> + *   Adapter identifier.
> + * @param dmadev_id
> + *   dmadev identifier.
> + * @param queue_pair_id

queue_pair_id -> vchan_id

> + *   DMA device vchan queue identifier. If queue_pair_id is set -1, adapter adds all the
> + *   preconfigured queue pairs to the instance.
> + * @param event
> + *   If HW supports dmadev queue pair to event queue binding, application is expected to fill in
> + *   event information, else it will be NULL.
> + *   @see RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND
> + *
> + * @return
> + *   - 0: Success, vchan queue added correctly.
> + *   - <0: Error code on failure.
> + */
> +__rte_experimental
> +int rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dmadev_id, int32_t queue_pair_id,

Remove "queue".

> +                                          const struct rte_event *event);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Delete a vchan queue from an event DMA adapter.
> + *
> + * @param id
> + *   Adapter identifier.
> + * @param dmadev_id
> + *   DMA device identifier.
> + * @param queue_pair_id

queue_pair_id -> vchan_id

> + *   DMA device vchan queue identifier.
> + *
> + * @return
> + *   - 0: Success, vchan queue deleted successfully.
> + *   - <0: Error code on failure.
> + */
> +__rte_experimental
> +int rte_event_dma_adapter_vchan_queue_del(uint8_t id, int16_t dmadev_id, int32_t queue_pair_id);

Remove "queue".

> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a burst of DMA operations as event objects supplied in *rte_event* structure on an event
> + * DMA adapter designated by its event *evdev_id* through the event port specified by *port_id*.
> + * This function is supported if the eventdev PMD has the
> + * #RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability flag set.
> + *
> + * The *nb_events* parameter is the number of event objects to enqueue that are supplied in the
> + * *ev* array of *rte_event* structure.
> + *
> + * The rte_event_dma_adapter_enqueue() function returns the number of event objects it actually
> + * enqueued.
> + * A return value equal to *nb_events* means that all event objects have been enqueued.
> + *
> + * @param evdev_id
> + *   The identifier of the device.
> + * @param port_id
> + *   The identifier of the event port.
> + * @param ev
> + *   Points to an array of *nb_events* objects of type *rte_event* structure which contain the
> + *   event object enqueue operations to be processed.

Document the use of rte_event_dma_adapter_op here.

> + * @param nb_events
> + *   The number of event objects to enqueue, typically number of
> + *   rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) available for this port.
> + *
> + * @return
> + *   The number of event objects actually enqueued on the event device. The return value can be
> + *   less than the value of the *nb_events* parameter when the event devices queue is full or if
> + *   invalid parameters are specified in a *rte_event*. If the return value is less than *nb_events*,
> + *   the remaining events at the end of ev[] are not consumed and the caller has to take care of them,
> + *   and rte_errno is set accordingly. Possible errno values include:
> + *
> + *   - EINVAL: The port ID is invalid, device ID is invalid, an event's queue ID is invalid, or an
> + *     event's sched type doesn't match the capabilities of the destination queue.
> + *   - ENOSPC: The event port was backpressured and unable to enqueue one or more events. This
> + *     error code is only applicable to closed systems.
> + */
> +__rte_experimental
> +uint16_t rte_event_dma_adapter_enqueue(uint8_t evdev_id, uint8_t port_id, struct rte_event ev[],
> +                                       uint16_t nb_events);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +
> +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND 0x4

QP -> VCHAN

> +/**< Flag indicates HW is capable of mapping DMA queue pair to

queue pair -> vchan

> + * event queue.
> + */
> +
> +/**
> + * Retrieve the event device's DMA adapter capabilities for the
> + * specified dmadev device
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @param dmadev_id
> + *   The identifier of the dmadev device.
> + *
> + * @param[out] caps
> + *   A pointer to memory filled with event adapter capabilities.
> + *   It is expected to be pre-allocated & initialized by caller.
> + *
> + * @return
> + *   - 0: Success, driver provides event adapter capabilities for the
> + *     dmadev device.
> + *   - <0: Error code returned by the driver function.
> + *
> + */
> +int
> +rte_event_dma_adapter_caps_get(uint8_t dev_id, int16_t dmadev_id, uint32_t *caps);
> +
>  extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 7ce09a87bb..597a5c9cda 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -134,6 +134,21 @@ EXPERIMENTAL {
>
>  # added in 23.11
>  rte_event_eth_rx_adapter_create_ext_with_params;
> +        rte_event_dma_adapter_create_ext;
> +        rte_event_dma_adapter_create;
> +        rte_event_dma_adapter_free;
> +        rte_event_dma_adapter_event_port_get;
> +        rte_event_dma_adapter_vchan_queue_add;
> +        rte_event_dma_adapter_vchan_queue_del;
> +        rte_event_dma_adapter_service_id_get;
> +        rte_event_dma_adapter_start;
> +        rte_event_dma_adapter_stop;
> +        rte_event_dma_adapter_runtime_params_init;
> +        rte_event_dma_adapter_runtime_params_set;
> +        rte_event_dma_adapter_runtime_params_get;
> +        rte_event_dma_adapter_stats_get;
> +        rte_event_dma_adapter_stats_reset;
> +        rte_event_dma_adapter_enqueue;

Sort these in alphabetical order.

Phew :-)