DPDK patches and discussions
* [RFC 0/2] introduce event vector adapter
@ 2025-03-26 13:14 pbhagavatula
  2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: pbhagavatula @ 2025-03-26 13:14 UTC (permalink / raw)
  To: jerinj; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

The event vector adapter supports offloading the creation of event vectors
by aggregating objects (mbufs/ptrs/u64s) into vector events.

An event vector adapter has the following working model:

         ┌──────────┐
         │  Vector  ├─┐
         │ adapter0 │ │
         └──────────┘ │
         ┌──────────┐ │   ┌──────────┐
         │  Vector  ├─┼──►│  Event   │
         │ adapter1 │ │   │  Queue0  │
         └──────────┘ │   └──────────┘
         ┌──────────┐ │
         │  Vector  ├─┘
         │ adapter2 │
         └──────────┘

         ┌──────────┐
         │  Vector  ├─┐
         │ adapter0 │ │   ┌──────────┐
         └──────────┘ ├──►│  Event   │
         ┌──────────┐ │   │  Queue1  │
         │  Vector  ├─┘   └──────────┘
         │ adapter1 │
         └──────────┘

 - A vector adapter can be seen as an extension of an event queue: it
   aggregates objects and generates a vector event, which is enqueued to the
   event queue.

 - Multiple vector adapters can be created on an event queue, each with its
   own unique properties such as event properties, vector size, and timeout.
   Note: If the target event queue doesn't support RTE_EVENT_QUEUE_CFG_ALL_TYPES,
         then the vector adapter should use the same schedule type as the event
         queue.

 - Each vector adapter aggregates objects, generates a vector event and
   enqueues it to the event queue with the event properties mentioned in
   rte_event_vector_adapter_conf::ev.

 - After configuring the vector adapter, the application needs to use the
   rte_event_vector_adapter_enqueue() function to enqueue objects, i.e.,
   mbufs/ptrs/u64s, to the vector adapter.
   On reaching the configured vector size or timeout, the vector adapter
   enqueues the event vector to the event queue.
   Note: The application should set event_type and sub_event_type such that
         the contents of the vector event can be identified on dequeue.

 - If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
   capability, the application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
   to indicate the start and end of a vector event.
   * When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
     aggregation in progress as a vector event and start aggregating a new
     vector event with the enqueued ptr.
   * When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
     enqueued ptr to the aggregation in progress and enqueue the vector event
     to the event queue.
   * If both flags are set, the vector adapter will flush the current
     aggregation as a vector event and enqueue the current ptr as a single
     event to the event queue.

 - If the vector adapter reaches the configured vector size, it will enqueue
   the aggregated vector event to the event queue.

 - If the vector adapter reaches the configured vector timeout, it will flush
   the current aggregation as a vector event if the minimum vector size has
   been reached; otherwise, it will enqueue the objects as single events to
   the event queue.

 - If the vector adapter is unable to aggregate the objects into a vector event,
   it will enqueue the objects as single events to the event queue with the event
   properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
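
 As a sketch, the per-adapter configuration described above might look as
 follows. The values are illustrative only; the field names follow this RFC's
 ``rte_event_vector_adapter_conf``, and ``vector_mp`` is assumed to be a
 mempool of vector events created beforehand by the application:

```c
/* Sketch: configuring a vector adapter targeting event queue 0.
 * Illustrative values; vector_mp is an assumed application-created
 * mempool holding the event vectors to be filled by the adapter. */
struct rte_event_vector_adapter_conf conf = {
	.event_dev_id = 0,
	.socket_id = rte_socket_id(),
	.ev = {
		.queue_id = 0,
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		.event_type = RTE_EVENT_TYPE_CPU_VECTOR,
	},
	.ev_fallback = {
		/* Used when objects cannot be aggregated into a vector. */
		.event_type = RTE_EVENT_TYPE_CPU,
	},
	.vector_sz = 64,             /* flush once 64 objects aggregated */
	.vector_timeout_ns = 100000, /* or after 100 us, whichever first */
	.vector_mp = vector_mp,
};
```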

 Before using the vector adapter, the application must create and configure
 an event device; depending on the event device's capabilities, an additional
 event port may also need to be created.

 When the application creates the vector adapter using the
 ``rte_event_vector_adapter_create()`` function, the event device driver
 capabilities are checked. If an in-built port is absent, a default callback
 is used to create a new event port.
 For finer control over event port creation, the application should use
 the ``rte_event_vector_adapter_create_ext()`` function.

 The application can enqueue one or more objects to the vector adapter using the
 ``rte_event_vector_adapter_enqueue()`` function and control the aggregation
 using the flags.

 Vector adapters report stats using the ``rte_event_vector_adapter_stats_get()``
 function and reset them using the ``rte_event_vector_adapter_stats_reset()`` function.

 The application can destroy the vector adapter using the
 ``rte_event_vector_adapter_destroy()`` function.
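
 The lifecycle above can be sketched as below. This is illustrative only:
 ``conf`` is assumed to be a populated ``rte_event_vector_adapter_conf``, and
 whether a zero-length enqueue is a valid way to issue a standalone
 RTE_EVENT_VECTOR_ENQ_FLUSH is an assumption, not something this RFC states:

```c
/* Sketch: create -> enqueue -> stats -> destroy, assuming the event
 * device is already configured and `conf` is populated. */
struct rte_event_vector_adapter *adapter;
struct rte_event_vector_adapter_stats stats;
uintptr_t objs[32]; /* mbufs/ptrs/u64s to be vectorized */
int rc;

/* Uses the default event port configuration callback; use
 * rte_event_vector_adapter_create_ext() for finer port control. */
adapter = rte_event_vector_adapter_create(&conf);
if (adapter == NULL)
	rte_panic("vector adapter creation failed: %d\n", rte_errno);

/* Enqueue objects; the adapter flushes on vector size or timeout. */
rc = rte_event_vector_adapter_enqueue(adapter, objs, 32, 0);
if (rc < 0)
	rte_panic("enqueue failed: %d\n", rc);

/* Assumed usage: force out any partial aggregation. */
rte_event_vector_adapter_enqueue(adapter, NULL, 0,
				 RTE_EVENT_VECTOR_ENQ_FLUSH);

rte_event_vector_adapter_stats_get(adapter, &stats);
rte_event_vector_adapter_stats_reset(adapter);
rte_event_vector_adapter_destroy(adapter);
```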

Pavan Nikhilesh (2):
  eventdev: introduce event vector adapter
  eventdev: add default software vector adapter

 config/rte_config.h                     |   1 +
 lib/eventdev/event_vector_adapter_pmd.h |  87 +++
 lib/eventdev/eventdev_pmd.h             |  38 ++
 lib/eventdev/meson.build                |   3 +
 lib/eventdev/rte_event_vector_adapter.c | 762 ++++++++++++++++++++++++
 lib/eventdev/rte_event_vector_adapter.h | 469 +++++++++++++++
 lib/eventdev/rte_eventdev.c             |  23 +
 lib/eventdev/rte_eventdev.h             |   8 +
 lib/eventdev/version.map                |  13 +
 9 files changed, 1404 insertions(+)
 create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
 create mode 100644 lib/eventdev/rte_event_vector_adapter.c
 create mode 100644 lib/eventdev/rte_event_vector_adapter.h

--
2.43.0



* [RFC 1/2] eventdev: introduce event vector adapter
  2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
@ 2025-03-26 13:14 ` pbhagavatula
  2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software " pbhagavatula
  2025-03-26 17:06 ` [RFC 0/2] introduce event " Pavan Nikhilesh Bhagavatula
  2 siblings, 0 replies; 8+ messages in thread
From: pbhagavatula @ 2025-03-26 13:14 UTC (permalink / raw)
  To: jerinj, Bruce Richardson; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

The event vector adapter supports offloading the creation of
event vectors by aggregating objects (mbufs/ptrs/u64s).
Applications can create a vector adapter associated with
an event queue and enqueue objects to be vectorized.
When the vector reaches the configured size or when the timeout
is reached, the vector adapter will enqueue the vector to the
event queue.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 config/rte_config.h                     |   1 +
 lib/eventdev/event_vector_adapter_pmd.h |  87 +++++
 lib/eventdev/eventdev_pmd.h             |  36 ++
 lib/eventdev/meson.build                |   3 +
 lib/eventdev/rte_event_vector_adapter.c | 444 ++++++++++++++++++++++
 lib/eventdev/rte_event_vector_adapter.h | 469 ++++++++++++++++++++++++
 lib/eventdev/rte_eventdev.c             |  21 ++
 lib/eventdev/rte_eventdev.h             |   8 +
 lib/eventdev/version.map                |  13 +
 9 files changed, 1082 insertions(+)
 create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
 create mode 100644 lib/eventdev/rte_event_vector_adapter.c
 create mode 100644 lib/eventdev/rte_event_vector_adapter.h

diff --git a/config/rte_config.h b/config/rte_config.h
index 86897de75e..9535c48d81 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -92,6 +92,7 @@
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
 #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
 #define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 64
diff --git a/lib/eventdev/event_vector_adapter_pmd.h b/lib/eventdev/event_vector_adapter_pmd.h
new file mode 100644
index 0000000000..dab0350564
--- /dev/null
+++ b/lib/eventdev/event_vector_adapter_pmd.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+#ifndef __EVENT_VECTOR_ADAPTER_PMD_H__
+#define __EVENT_VECTOR_ADAPTER_PMD_H__
+/**
+ * @file
+ * RTE Event Vector Adapter API (PMD Side)
+ *
+ * @note
+ * This file provides implementation helpers for internal use by PMDs.  They
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+#include "eventdev_pmd.h"
+#include "rte_event_vector_adapter.h"
+
+typedef int (*rte_event_vector_adapter_create_t)(struct rte_event_vector_adapter *adapter);
+/**< @internal Event vector adapter implementation setup */
+typedef int (*rte_event_vector_adapter_destroy_t)(struct rte_event_vector_adapter *adapter);
+/**< @internal Event vector adapter implementation teardown */
+typedef int (*rte_event_vector_adapter_caps_get_t)(struct rte_eventdev *dev);
+/**< @internal Get capabilities for event vector adapter */
+typedef int (*rte_event_vector_adapter_stats_get_t)(const struct rte_event_vector_adapter *adapter,
+						    struct rte_event_vector_adapter_stats *stats);
+/**< @internal Get statistics for event vector adapter */
+typedef int (*rte_event_vector_adapter_stats_reset_t)(
+	const struct rte_event_vector_adapter *adapter);
+/**< @internal Reset statistics for event vector adapter */
+
+/**
+ * @internal Structure containing the functions exported by an event vector
+ * adapter implementation.
+ */
+struct event_vector_adapter_ops {
+	rte_event_vector_adapter_create_t create;
+	/**< Set up adapter */
+	rte_event_vector_adapter_destroy_t destroy;
+	/**< Tear down adapter */
+	rte_event_vector_adapter_caps_get_t caps_get;
+	/**< Get capabilities from driver */
+	rte_event_vector_adapter_stats_get_t stats_get;
+	/**< Get adapter statistics */
+	rte_event_vector_adapter_stats_reset_t stats_reset;
+	/**< Reset adapter statistics */
+
+	rte_event_vector_adapter_enqueue_t enqueue;
+	/**< Enqueue ptrs into the event vector adapter */
+};
+/**
+ * @internal Adapter data; structure to be placed in shared memory to be
+ * accessible by various processes in a multi-process configuration.
+ */
+struct __rte_cache_aligned rte_event_vector_adapter_data {
+	uint32_t id;
+	/**< Event vector adapter ID */
+	uint8_t event_dev_id;
+	/**< Event device ID */
+	uint32_t socket_id;
+	/**< Socket ID where memory is allocated */
+	uint8_t event_port_id;
+	/**< Optional: event port ID used when the inbuilt port is absent */
+	const struct rte_memzone *mz;
+	/**< Event vector adapter memzone pointer */
+	struct rte_event_vector_adapter_conf conf;
+	/**< Configuration used to configure the adapter. */
+	uint32_t caps;
+	/**< Adapter capabilities */
+	void *adapter_priv;
+	/**< Vector adapter private data */
+	uint32_t unified_service_id;
+	/**< Unified Service ID */
+};
+
+static int
+dummy_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uintptr_t ptrs[],
+			     uint16_t num_events, uint64_t flags)
+{
+	RTE_SET_USED(adapter);
+	RTE_SET_USED(ptrs);
+	RTE_SET_USED(num_events);
+	RTE_SET_USED(flags);
+	return 0;
+}
+
+#endif /* __EVENT_VECTOR_ADAPTER_PMD_H__ */
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ad13ba5b03..d03461316b 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -26,6 +26,7 @@
 
 #include "event_timer_adapter_pmd.h"
 #include "rte_event_eth_rx_adapter.h"
+#include "rte_event_vector_adapter.h"
 #include "rte_eventdev.h"
 
 #ifdef __cplusplus
@@ -1555,6 +1556,36 @@ typedef int (*eventdev_dma_adapter_stats_get)(const struct rte_eventdev *dev,
 typedef int (*eventdev_dma_adapter_stats_reset)(const struct rte_eventdev *dev,
 						const int16_t dma_dev_id);
 
+/**
+ * Event device vector adapter capabilities.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param caps
+ *   Vector adapter capabilities
+ * @param ops
+ *   Vector adapter ops
+ *
+ * @return
+ *   Return 0 on success.
+ *
+ */
+typedef int (*eventdev_vector_adapter_caps_get_t)(const struct rte_eventdev *dev, uint32_t *caps,
+						  const struct event_vector_adapter_ops **ops);
+
+/**
+ * Event device vector adapter info.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param info
+ *   Vector adapter info
+ *
+ * @return
+ *   Return 0 on success.
+ */
+typedef int (*eventdev_vector_adapter_info_get_t)(const struct rte_eventdev *dev,
+						  struct rte_event_vector_adapter_info *info);
 
 /** Event device operations function pointer table */
 struct eventdev_ops {
@@ -1697,6 +1728,11 @@ struct eventdev_ops {
 	eventdev_dma_adapter_stats_reset dma_adapter_stats_reset;
 	/**< Reset DMA stats */
 
+	eventdev_vector_adapter_caps_get_t vector_adapter_caps_get;
+	/**< Get vector adapter capabilities */
+	eventdev_vector_adapter_info_get_t vector_adapter_info_get;
+	/**< Get vector adapter info */
+
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
 
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 71dea91727..0797c145e7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -18,6 +18,7 @@ sources = files(
         'rte_event_eth_tx_adapter.c',
         'rte_event_ring.c',
         'rte_event_timer_adapter.c',
+        'rte_event_vector_adapter.c',
         'rte_eventdev.c',
 )
 headers = files(
@@ -27,6 +28,7 @@ headers = files(
         'rte_event_eth_tx_adapter.h',
         'rte_event_ring.h',
         'rte_event_timer_adapter.h',
+        'rte_event_vector_adapter.h',
         'rte_eventdev.h',
         'rte_eventdev_trace_fp.h',
 )
@@ -38,6 +40,7 @@ driver_sdk_headers += files(
         'eventdev_pmd_pci.h',
         'eventdev_pmd_vdev.h',
         'event_timer_adapter_pmd.h',
+        'event_vector_adapter_pmd.h',
 )
 
 deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev']
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
new file mode 100644
index 0000000000..5f38a9a40b
--- /dev/null
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -0,0 +1,444 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_mcslock.h>
+#include <rte_service_component.h>
+#include <rte_tailq.h>
+
+#include "event_vector_adapter_pmd.h"
+#include "eventdev_pmd.h"
+#include "rte_event_vector_adapter.h"
+
+#define ADAPTER_ID(dev_id, queue_id, adapter_id)                                                   \
+	((uint32_t)dev_id << 16 | (uint32_t)queue_id << 8 | (uint32_t)adapter_id)
+#define DEV_ID_FROM_ADAPTER_ID(adapter_id)     ((adapter_id >> 16) & 0xFF)
+#define QUEUE_ID_FROM_ADAPTER_ID(adapter_id)   ((adapter_id >> 8) & 0xFF)
+#define ADAPTER_ID_FROM_ADAPTER_ID(adapter_id) (adapter_id & 0xFF)
+
+#define MZ_NAME_MAX_LEN	    64
+#define DATA_MZ_NAME_FORMAT "rte_event_vector_adapter_data_%d_%d_%d"
+
+RTE_LOG_REGISTER_SUFFIX(ev_vector_logtype, adapter.vector, NOTICE);
+#define RTE_LOGTYPE_EVVEC ev_vector_logtype
+
+struct rte_event_vector_adapter *adapters[RTE_EVENT_MAX_DEVS][RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+#define EVVEC_LOG(level, logtype, ...)                                                             \
+	RTE_LOG_LINE_PREFIX(level, logtype,                                                        \
+			    "EVVEC: %s() line %u: ", __func__ RTE_LOG_COMMA __LINE__, __VA_ARGS__)
+#define EVVEC_LOG_ERR(...) EVVEC_LOG(ERR, EVVEC, __VA_ARGS__)
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define EVVEC_LOG_DBG(...) EVVEC_LOG(DEBUG, EVVEC, __VA_ARGS__)
+#else
+#define EVVEC_LOG_DBG(...) /* No debug logging */
+#endif
+
+#define PTR_VALID_OR_ERR_RET(ptr, retval)                                                          \
+	do {                                                                                       \
+		if (ptr == NULL) {                                                                 \
+			rte_errno = EINVAL;                                                        \
+			return retval;                                                             \
+		}                                                                                  \
+	} while (0)
+
+static int
+validate_conf(const struct rte_event_vector_adapter_conf *conf,
+	      struct rte_event_vector_adapter_info *info)
+{
+	int rc = -EINVAL;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(conf->event_dev_id, rc);
+
+	if (conf->vector_sz < info->min_vector_sz || conf->vector_sz > info->max_vector_sz) {
+		EVVEC_LOG_DBG("invalid vector size %u, should be between %u and %u",
+			      conf->vector_sz, info->min_vector_sz, info->max_vector_sz);
+		return rc;
+	}
+
+	if (conf->vector_timeout_ns < info->min_vector_timeout_ns ||
+	    conf->vector_timeout_ns > info->max_vector_timeout_ns) {
+		EVVEC_LOG_DBG("invalid vector timeout %u, should be between %u and %u",
+			      conf->vector_timeout_ns, info->min_vector_timeout_ns,
+			      info->max_vector_timeout_ns);
+		return rc;
+	}
+
+	if (conf->vector_mp == NULL) {
+		EVVEC_LOG_DBG("invalid mempool for vector adapter");
+		return rc;
+	}
+
+	if (info->log2_sz && rte_is_power_of_2(conf->vector_sz) == 0) {
+		EVVEC_LOG_DBG("invalid vector size %u, should be a power of 2", conf->vector_sz);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int
+default_port_conf_cb(uint8_t event_dev_id, uint8_t *event_port_id, void *conf_arg)
+{
+	struct rte_event_port_conf *port_conf, def_port_conf = {0};
+	struct rte_event_dev_config dev_conf;
+	struct rte_eventdev *dev;
+	uint8_t port_id;
+	uint8_t dev_id;
+	int started;
+	int ret;
+
+	dev = &rte_eventdevs[event_dev_id];
+	dev_id = dev->data->dev_id;
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(dev_id);
+
+	port_id = dev_conf.nb_event_ports;
+	if (conf_arg != NULL)
+		port_conf = conf_arg;
+	else {
+		port_conf = &def_port_conf;
+		ret = rte_event_port_default_conf_get(dev_id, (port_id - 1), port_conf);
+		if (ret < 0)
+			return ret;
+	}
+
+	dev_conf.nb_event_ports += 1;
+	if (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_SINGLE_LINK)
+		dev_conf.nb_single_link_event_port_queues += 1;
+
+	ret = rte_event_dev_configure(dev_id, &dev_conf);
+	if (ret < 0) {
+		EVVEC_LOG_ERR("failed to configure event dev %u", dev_id);
+		if (started)
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+
+		return ret;
+	}
+
+	ret = rte_event_port_setup(dev_id, port_id, port_conf);
+	if (ret < 0) {
+		EVVEC_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id);
+		return ret;
+	}
+
+	*event_port_id = port_id;
+
+	if (started)
+		ret = rte_event_dev_start(dev_id);
+
+	return ret;
+}
+
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf)
+{
+	return rte_event_vector_adapter_create_ext(conf, default_port_conf_cb, NULL);
+}
+
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
+				    rte_event_vector_adapter_port_conf_cb_t conf_cb, void *conf_arg)
+{
+	struct rte_event_vector_adapter *adapter = NULL;
+	struct rte_event_vector_adapter_info info;
+	char mz_name[MZ_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	struct rte_eventdev *dev;
+	uint32_t caps = 0;
+	int i, n, rc;
+
+	PTR_VALID_OR_ERR_RET(conf, NULL);
+
+	if (adapters[conf->event_dev_id][conf->ev.queue_id] == NULL) {
+		adapters[conf->event_dev_id][conf->ev.queue_id] =
+			rte_zmalloc("rte_event_vector_adapter",
+				    sizeof(struct rte_event_vector_adapter) *
+					    RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE,
+				    RTE_CACHE_LINE_SIZE);
+		if (adapters[conf->event_dev_id][conf->ev.queue_id] == NULL) {
+			EVVEC_LOG_DBG("failed to allocate memory for vector adapters");
+			rte_errno = ENOMEM;
+			return NULL;
+		}
+	}
+
+	for (i = 0; i < RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE; i++) {
+		if (adapters[conf->event_dev_id][conf->ev.queue_id][i].used == false) {
+			adapter = &adapters[conf->event_dev_id][conf->ev.queue_id][i];
+			adapter->adapter_id = ADAPTER_ID(conf->event_dev_id, conf->ev.queue_id, i);
+			adapter->used = true;
+			break;
+		}
+	}
+
+	if (adapter == NULL) {
+		EVVEC_LOG_DBG("no available vector adapters");
+		rte_errno = ENODEV;
+		return NULL;
+	}
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(conf->event_dev_id, NULL);
+
+	dev = &rte_eventdevs[conf->event_dev_id];
+	if (dev->dev_ops->vector_adapter_caps_get != NULL &&
+	    dev->dev_ops->vector_adapter_info_get != NULL) {
+		rc = dev->dev_ops->vector_adapter_caps_get(dev, &caps, &adapter->ops);
+		if (rc < 0) {
+			EVVEC_LOG_DBG("failed to get vector adapter capabilities rc = %d", rc);
+			rte_errno = ENOTSUP;
+			goto error;
+		}
+
+		rc = dev->dev_ops->vector_adapter_info_get(dev, &info);
+		if (rc < 0) {
+			adapter->ops = NULL;
+			EVVEC_LOG_DBG("failed to get vector adapter info rc = %d", rc);
+			rte_errno = ENOTSUP;
+			goto error;
+		}
+	}
+
+	if (conf->ev.sched_type != dev->data->queues_cfg[conf->ev.queue_id].schedule_type &&
+	    !(dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)) {
+		EVVEC_LOG_DBG("invalid event schedule type, eventdev doesn't support all types");
+		rte_errno = EINVAL;
+		goto error;
+	}
+
+	if (!(caps & RTE_EVENT_VECTOR_ADAPTER_CAP_INTERNAL_PORT)) {
+		if (conf_cb == NULL) {
+			EVVEC_LOG_DBG("port config callback is NULL");
+			rte_errno = EINVAL;
+			goto error;
+		}
+
+		rc = conf_cb(conf->event_dev_id, &adapter->data->event_port_id, conf_arg);
+		if (rc < 0) {
+			EVVEC_LOG_DBG("failed to create port for vector adapter");
+			rte_errno = EINVAL;
+			goto error;
+		}
+	}
+
+	rc = validate_conf(conf, &info);
+	if (rc < 0) {
+		adapter->ops = NULL;
+		rte_errno = EINVAL;
+		goto error;
+	}
+
+	n = snprintf(mz_name, MZ_NAME_MAX_LEN, DATA_MZ_NAME_FORMAT, conf->event_dev_id,
+		     conf->ev.queue_id, adapter->adapter_id);
+	if (n >= (int)sizeof(mz_name)) {
+		adapter->ops = NULL;
+		EVVEC_LOG_DBG("failed to create memzone name");
+		rte_errno = EINVAL;
+		goto error;
+	}
+	mz = rte_memzone_reserve(mz_name, sizeof(struct rte_event_vector_adapter_data),
+				 conf->socket_id, 0);
+	if (mz == NULL) {
+		adapter->ops = NULL;
+		EVVEC_LOG_DBG("failed to reserve memzone for vector adapter");
+		rte_errno = ENOMEM;
+		goto error;
+	}
+
+	adapter->data = mz->addr;
+	memset(adapter->data, 0, sizeof(struct rte_event_vector_adapter_data));
+
+	adapter->data->mz = mz;
+	adapter->data->event_dev_id = conf->event_dev_id;
+	adapter->data->id = adapter->adapter_id;
+	adapter->data->socket_id = conf->socket_id;
+	adapter->data->conf = *conf;
+
+	FUNC_PTR_OR_ERR_RET(adapter->ops->create, NULL);
+
+	rc = adapter->ops->create(adapter);
+	if (rc < 0) {
+		adapter->ops = NULL;
+		EVVEC_LOG_DBG("failed to create vector adapter");
+		rte_errno = EINVAL;
+		goto error;
+	}
+
+	adapter->enqueue = adapter->ops->enqueue;
+
+	return adapter;
+
+error:
+	adapter->used = false;
+	return NULL;
+}
+
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_lookup(uint32_t adapter_id)
+{
+	uint8_t adapter_idx = ADAPTER_ID_FROM_ADAPTER_ID(adapter_id);
+	uint8_t queue_id = QUEUE_ID_FROM_ADAPTER_ID(adapter_id);
+	uint8_t dev_id = DEV_ID_FROM_ADAPTER_ID(adapter_id);
+	struct rte_event_vector_adapter *adapter;
+	const struct rte_memzone *mz;
+	char name[MZ_NAME_MAX_LEN];
+	struct rte_eventdev *dev;
+	int rc;
+
+	if (dev_id >= RTE_EVENT_MAX_DEVS || queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV ||
+	    adapter_idx >= RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE) {
+		EVVEC_LOG_ERR("invalid adapter id %u", adapter_id);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	if (adapters[dev_id][queue_id] == NULL) {
+		adapters[dev_id][queue_id] =
+			rte_zmalloc("rte_event_vector_adapter",
+				    sizeof(struct rte_event_vector_adapter) *
+					    RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE,
+				    RTE_CACHE_LINE_SIZE);
+		if (adapters[dev_id][queue_id] == NULL) {
+			EVVEC_LOG_DBG("failed to allocate memory for vector adapters");
+			rte_errno = ENOMEM;
+			return NULL;
+		}
+	}
+
+	if (adapters[dev_id][queue_id][adapter_idx].used == true)
+		return &adapters[dev_id][queue_id][adapter_idx];
+
+	adapter = &adapters[dev_id][queue_id][adapter_idx];
+
+	snprintf(name, MZ_NAME_MAX_LEN, DATA_MZ_NAME_FORMAT, dev_id, queue_id, adapter_idx);
+	mz = rte_memzone_lookup(name);
+	if (mz == NULL) {
+		EVVEC_LOG_DBG("failed to lookup memzone for vector adapter");
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	adapter->data = mz->addr;
+	dev = &rte_eventdevs[dev_id];
+
+	if (dev->dev_ops->vector_adapter_caps_get != NULL) {
+		rc = dev->dev_ops->vector_adapter_caps_get(dev, &adapter->data->caps,
+							   &adapter->ops);
+		if (rc < 0) {
+			EVVEC_LOG_DBG("failed to get vector adapter capabilities");
+			rte_errno = ENOTSUP;
+			return NULL;
+		}
+	}
+
+	adapter->enqueue = adapter->ops->enqueue;
+	adapter->adapter_id = adapter_id;
+	adapter->used = true;
+
+	return adapter;
+}
+
+int
+rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
+{
+	int rc;
+
+	PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+	if (adapter->used == false) {
+		EVVEC_LOG_ERR("event vector adapter is not allocated");
+		return -EINVAL;
+	}
+
+	FUNC_PTR_OR_ERR_RET(adapter->ops->destroy, -ENOTSUP);
+
+	rc = adapter->ops->destroy(adapter);
+	if (rc < 0) {
+		EVVEC_LOG_DBG("failed to destroy vector adapter");
+		return rc;
+	}
+
+	rte_memzone_free(adapter->data->mz);
+	adapter->ops = NULL;
+	adapter->enqueue = dummy_vector_adapter_enqueue;
+	adapter->data = NULL;
+	adapter->used = false;
+
+	return 0;
+}
+
+int
+rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_adapter_info *info)
+{
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(event_dev_id, -EINVAL);
+	PTR_VALID_OR_ERR_RET(info, -EINVAL);
+
+	struct rte_eventdev *dev = &rte_eventdevs[event_dev_id];
+	if (dev->dev_ops->vector_adapter_info_get != NULL)
+		return dev->dev_ops->vector_adapter_info_get(dev, info);
+
+	return 0;
+}
+
+int
+rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
+				  struct rte_event_vector_adapter_conf *conf)
+{
+	PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+	PTR_VALID_OR_ERR_RET(conf, -EINVAL);
+
+	*conf = adapter->data->conf;
+	return 0;
+}
+
+uint8_t
+rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
+{
+	uint8_t remaining = 0;
+	int i;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(event_dev_id, 0);
+
+	if (event_queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+		return 0;
+
+	for (i = 0; i < RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE; i++) {
+		if (adapters[event_dev_id][event_queue_id][i].used == false)
+			remaining++;
+	}
+
+	return remaining;
+}
+
+int
+rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
+				   struct rte_event_vector_adapter_stats *stats)
+{
+	PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+	PTR_VALID_OR_ERR_RET(stats, -EINVAL);
+
+	FUNC_PTR_OR_ERR_RET(adapter->ops->stats_get, -ENOTSUP);
+
+	adapter->ops->stats_get(adapter, stats);
+
+	return 0;
+}
+
+int
+rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
+{
+	PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+
+	FUNC_PTR_OR_ERR_RET(adapter->ops->stats_reset, -ENOTSUP);
+
+	adapter->ops->stats_reset(adapter);
+
+	return 0;
+}
diff --git a/lib/eventdev/rte_event_vector_adapter.h b/lib/eventdev/rte_event_vector_adapter.h
new file mode 100644
index 0000000000..e7ecc26cdf
--- /dev/null
+++ b/lib/eventdev/rte_event_vector_adapter.h
@@ -0,0 +1,469 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __RTE_EVENT_VECTOR_ADAPTER_H__
+#define __RTE_EVENT_VECTOR_ADAPTER_H__
+
+/**
+ * @file rte_event_vector_adapter.h
+ *
+ * @warning
+ * @b EXPERIMENTAL:
+ * All functions in this file may be changed or removed without prior notice.
+ *
+ * Event vector adapter API.
+ *
+ * An event vector adapter has the following working model:
+ *
+ *         ┌──────────┐
+ *         │  Vector  ├─┐
+ *         │ adapter0 │ │
+ *         └──────────┘ │
+ *         ┌──────────┐ │   ┌──────────┐
+ *         │  Vector  ├─┼──►│  Event   │
+ *         │ adapter1 │ │   │  Queue0  │
+ *         └──────────┘ │   └──────────┘
+ *         ┌──────────┐ │
+ *         │  Vector  ├─┘
+ *         │ adapter2 │
+ *         └──────────┘
+ *
+ *         ┌──────────┐
+ *         │  Vector  ├─┐
+ *         │ adapter0 │ │   ┌──────────┐
+ *         └──────────┘ ├──►│  Event   │
+ *         ┌──────────┐ │   │  Queue1  │
+ *         │  Vector  ├─┘   └──────────┘
+ *         │ adapter1 │
+ *         └──────────┘
+ *
+ * - A vector adapter can be seen as an extension of an event queue: it
+ *   aggregates ptrs and generates a vector event, which is enqueued to the
+ *   event queue.
+ *
+ * - Multiple vector adapters can be created on an event queue, each with its
+ *   own unique properties such as event properties, vector size, and timeout.
+ *   Note: If the target event queue doesn't support RTE_EVENT_QUEUE_CFG_ALL_TYPES,
+ *         then the vector adapter should use the same schedule type as the event
+ *         queue.
+ *
+ * - Each vector adapter aggregates ptrs, generates a vector event and
+ *   enqueues it to the event queue with the event properties mentioned in
+ *   rte_event_vector_adapter_conf::ev.
+ *
+ * - After configuring the vector adapter, the application needs to use the
+ *   rte_event_vector_adapter_enqueue() function to enqueue objects, i.e.,
+ *   mbufs/ptrs/u64s, to the vector adapter.
+ *   On reaching the configured vector size or timeout, the vector adapter
+ *   enqueues the event vector to the event queue.
+ *   Note: The application should set event_type and sub_event_type such that
+ *         the contents of the vector event can be identified on dequeue.
+ *
+ * - If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
+ *   capability, the application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
+ *   to indicate the start and end of a vector event.
+ *   * When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
+ *     aggregation in progress as a vector event and start aggregating a new
+ *     vector event with the enqueued ptr.
+ *   * When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
+ *     enqueued ptr to the aggregation in progress and enqueue the vector event
+ *     to the event queue.
+ *   * If both flags are set, the vector adapter will flush the current
+ *     aggregation as a vector event and enqueue the current ptr as a single
+ *     event to the event queue.
+ *
+ * - If the vector adapter reaches the configured vector size, it will enqueue
+ *   the aggregated vector event to the event queue.
+ *
+ * - If the vector adapter reaches the configured vector timeout, it will flush
+ *   the current aggregation as a vector event if the minimum vector size has
+ *   been reached; otherwise, it will enqueue the ptrs as single events to the
+ *   event queue.
+ *
+ * - If the vector adapter is unable to aggregate the ptrs into a vector event,
+ *   it will enqueue the ptrs as single events to the event queue with the event
+ *   properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
+ *
+ * Before using the vector adapter, the application must create and configure
+ * an event device; depending on the event device's capabilities, an
+ * additional event port may also need to be created.
+ *
+ * When the application creates the vector adapter using the
+ * ``rte_event_vector_adapter_create()`` function, the event device driver
+ * capabilities are checked. If an in-built port is absent, a default
+ * callback is used to create a new event port.
+ * For finer control over event port creation, the application should use
+ * the ``rte_event_vector_adapter_create_ext()`` function.
+ *
+ * The application can enqueue one or more ptrs to the vector adapter using the
+ * ``rte_event_vector_adapter_enqueue()`` function and control the aggregation
+ * using the flags.
+ *
+ * Vector adapters report stats using the ``rte_event_vector_adapter_stats_get()``
+ * function and reset them using the ``rte_event_vector_adapter_stats_reset()`` function.
+ *
+ * The application can destroy the vector adapter using the
+ * ``rte_event_vector_adapter_destroy()`` function.
+ *
+ */
+
+#include <rte_eventdev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV (1ULL << 0)
+/**< Vector adapter supports Start of Vector (SOV) and End of Vector (EOV) flags
+ *  in the enqueue flags.
+ *
+ * @see RTE_EVENT_VECTOR_ENQ_SOV
+ * @see RTE_EVENT_VECTOR_ENQ_EOV
+ */
+
+#define RTE_EVENT_VECTOR_ENQ_SOV   (1ULL << 0)
+/**< Indicates the start of a vector event. When enqueue is called with
+ *  RTE_EVENT_VECTOR_ENQ_SOV, the vector adapter will flush any vector
+ *  aggregation in progress and start aggregating a new vector event with
+ *  the enqueued ptr.
+ */
+#define RTE_EVENT_VECTOR_ENQ_EOV   (1ULL << 1)
+/**< Indicates the end of a vector event. When enqueue is called with
+ *  RTE_EVENT_VECTOR_ENQ_EOV, the vector adapter will add the current ptr
+ *  to the aggregated event and flush the event vector.
+ */
+#define RTE_EVENT_VECTOR_ENQ_FLUSH (1ULL << 2)
+/**< Flush any in-progress vector aggregation. */
+
+/**
+ * Vector adapter configuration structure
+ */
+struct rte_event_vector_adapter_conf {
+	uint8_t event_dev_id;
+	/**< Event device identifier */
+	uint32_t socket_id;
+	/**< Identifier of socket from which to allocate memory for adapter */
+	struct rte_event ev;
+	/**<
+	 *  The values from the following event fields will be used when
+	 *  queuing work:
+	 *   - queue_id: Targeted event queue ID for vector event.
+	 *   - event_priority: Event priority of the vector event in
+	 *                     the event queue relative to other events.
+	 *   - sched_type: Scheduling type for events from this vector adapter.
+	 *   - event_type: Event type for the vector event.
+	 *   - sub_event_type: Sub event type for the vector event.
+	 *   - flow_id: Flow ID for the vectors enqueued to the event queue by
+	 *              the vector adapter.
+	 */
+	struct rte_event ev_fallback;
+	/**<
+	 * The values from the following event fields will be used when
+	 * aggregation fails and single event is enqueued:
+	 *   - event_type: Event type for the single event.
+	 *   - sub_event_type: Sub event type for the single event.
+	 *   - flow_id: Flow ID for the single event.
+	 *
+	 * Other fields are taken from rte_event_vector_adapter_conf::ev.
+	 */
+	uint16_t vector_sz;
+	/**<
+	 * Indicates the maximum number of enqueued objects to combine into a vector.
+	 * Should be within vectorization limits of the adapter.
+	 * @see rte_event_vector_adapter_info::min_vector_sz
+	 * @see rte_event_vector_adapter_info::max_vector_sz
+	 */
+	uint64_t vector_timeout_ns;
+	/**<
+	 * Indicates the maximum number of nanoseconds to wait before flushing an
+	 * incomplete vector. Should be within vectorization limits of the adapter.
+	 * @see rte_event_vector_adapter_info::min_vector_timeout_ns
+	 * @see rte_event_vector_adapter_info::max_vector_timeout_ns
+	 */
+	struct rte_mempool *vector_mp;
+	/**<
+	 * Indicates the mempool that should be used for allocating
+	 * rte_event_vector container.
+	 * @see rte_event_vector_pool_create
+	 */
+};
+
+/**
+ * Vector adapter info structure
+ */
+struct rte_event_vector_adapter_info {
+	uint8_t max_vector_adapters_per_event_queue;
+	/**< Maximum number of vector adapters configurable per event queue */
+	uint16_t min_vector_sz;
+	/**< Minimum vector size configurable */
+	uint16_t max_vector_sz;
+	/**< Maximum vector size configurable */
+	uint64_t min_vector_timeout_ns;
+	/**< Minimum vector timeout configurable */
+	uint64_t max_vector_timeout_ns;
+	/**< Maximum vector timeout configurable */
+	uint8_t log2_sz;
+	/**< True if the vector size should be configured as a log2 value (power of two). */
+};
+
+/**
+ * Vector adapter statistics structure
+ */
+struct rte_event_vector_adapter_stats {
+	uint64_t vectorized;
+	/**< Number of events vectorized */
+	uint64_t vectors_timedout;
+	/**< Number of vectors flushed due to timeout */
+	uint64_t vectors_flushed;
+	/**< Number of vectors flushed */
+	uint64_t alloc_failures;
+	/**< Number of vector allocation failures */
+};
+
+struct rte_event_vector_adapter;
+
+typedef int (*rte_event_vector_adapter_enqueue_t)(struct rte_event_vector_adapter *adapter,
+						  uintptr_t ptrs[], uint16_t num_elem,
+						  uint64_t flags);
+/**< @internal Enqueue ptrs into the event vector adapter. */
+
+struct __rte_cache_aligned rte_event_vector_adapter {
+	rte_event_vector_adapter_enqueue_t enqueue;
+	/**< Pointer to driver enqueue function. */
+	struct rte_event_vector_adapter_data *data;
+	/**< Pointer to the adapter data */
+	const struct event_vector_adapter_ops *ops;
+	/**< Functions exported by adapter driver */
+
+	uint32_t adapter_id;
+	/**< Identifier of the adapter instance. */
+	uint8_t used : 1;
+	/**< Flag to indicate that this adapter is being used. */
+};
+
+/**
+ * Callback function type for producer port creation.
+ */
+typedef int (*rte_event_vector_adapter_port_conf_cb_t)(uint8_t event_dev_id, uint8_t *event_port_id,
+						       void *conf_arg);
+
+/**
+ * Create an event vector adapter.
+ *
+ * This function creates an event vector adapter based on the provided
+ * configuration. The adapter can be used to combine multiple mbufs/ptrs/u64s
+ * into a single vector event, i.e., rte_event_vector, which is then enqueued
+ * to the event queue provided.
+ * @see rte_event_vector_adapter_conf::ev::event_queue_id.
+ *
+ * @param conf
+ *   Configuration for the event vector adapter.
+ * @return
+ *   - Pointer to the created event vector adapter on success.
+ *   - NULL on failure with rte_errno set to the error code.
+ *     Possible rte_errno values include:
+ *    - EINVAL: Invalid event device identifier specified in config.
+ *    - ENOMEM: Unable to allocate sufficient memory for adapter instances.
+ *    - ENOSPC: Maximum number of adapters already created.
+ */
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf);
+
+/**
+ * Create an event vector adapter with the supplied callback.
+ *
+ * This function can be used to have more granular control over the event
+ * vector adapter creation. If a built-in port is absent, then the function uses
+ * the callback provided to create and get the port id to be used as a producer
+ * port.
+ *
+ * @param conf
+ *   The event vector adapter configuration structure.
+ * @param conf_cb
+ *   The port config callback function.
+ * @param conf_arg
+ *   Opaque pointer to the argument for the callback function.
+ * @return
+ *   - Pointer to the newly allocated event vector adapter on success.
+ *   - NULL on error with rte_errno set appropriately.
+ *   Possible rte_errno values include:
+ *   - ERANGE: vector_timeout_ns is not in supported range.
+ *   - ENOMEM: Unable to allocate sufficient memory for adapter instances.
+ *   - EINVAL: Invalid event device identifier specified in config.
+ *   - ENOSPC: Maximum number of adapters already created.
+ */
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
+				    rte_event_vector_adapter_port_conf_cb_t conf_cb,
+				    void *conf_arg);
+
+/**
+ * Lookup an event vector adapter using its identifier.
+ *
+ * This function returns the event vector adapter based on the adapter_id.
+ * This is useful when the adapter is created in another process and the
+ * application wants to use the adapter in the current process.
+ *
+ * @param adapter_id
+ *   Identifier of the event vector adapter to look up.
+ * @return
+ *   - Pointer to the event vector adapter on success.
+ *   - NULL if the adapter is not found.
+ */
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_lookup(uint32_t adapter_id);
+
+/**
+ * Destroy an event vector adapter.
+ *
+ * This function releases the resources associated with the event vector adapter.
+ *
+ * @param adapter
+ *   Pointer to the event vector adapter to be destroyed.
+ * @return
+ *   - 0 on success.
+ *   - Negative value on failure with rte_errno set to the error code.
+ */
+int
+rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter);
+
+/**
+ * Get the vector info of an event vector adapter.
+ *
+ * This function retrieves the vector info of the event vector adapter.
+ *
+ * @param event_dev_id
+ *   Event device identifier.
+ * @param info
+ *   Pointer to the structure where the vector info will be stored.
+ * @return
+ *   0 on success, negative value on failure.
+ *   - -EINVAL if the event device identifier is invalid.
+ *   - -ENOTSUP if the event device does not support vector adapters.
+ */
+int
+rte_event_vector_adapter_info_get(uint8_t event_dev_id,
+				  struct rte_event_vector_adapter_info *info);
+
+/**
+ * Get the configuration of an event vector adapter.
+ *
+ * This function retrieves the configuration of the event vector adapter.
+ *
+ * @param adapter
+ *   Pointer to the event vector adapter.
+ * @param conf
+ *   Pointer to the structure where the configuration will be stored.
+ * @return
+ *   0 on success, negative value on failure.
+ */
+int
+rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
+				  struct rte_event_vector_adapter_conf *conf);
+
+/**
+ * Get the remaining event vector adapters.
+ *
+ * This function retrieves the number of remaining event vector adapters
+ * available for a given event device and event queue.
+ *
+ * @param event_dev_id
+ *   Event device identifier.
+ * @param event_queue_id
+ *   Event queue identifier.
+ * @return
+ *   Number of vector adapters that can still be created on the event queue.
+ */
+uint8_t
+rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id);
+
+/**
+ * Get the event vector adapter statistics.
+ *
+ * This function retrieves the statistics of the event vector adapter.
+ *
+ * @param adapter
+ *   Pointer to the event vector adapter.
+ * @param stats
+ *   Pointer to the structure where the statistics will be stored.
+ * @return
+ *   0 on success, negative value on failure.
+ */
+int
+rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
+				   struct rte_event_vector_adapter_stats *stats);
+
+/**
+ * Reset the event vector adapter statistics.
+ *
+ * This function resets the statistics of the event vector adapter to their default values.
+ *
+ * @param adapter
+ *   Pointer to the event vector adapter whose statistics are to be reset.
+ * @return
+ *   0 on success, negative value on failure.
+ */
+int
+rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter);
+
+/**
+ * Retrieve the service ID of the event vector adapter. If the adapter doesn't
+ * use an rte_service function, this function returns -ESRCH.
+ *
+ * @param adapter
+ *   A pointer to an event vector adapter.
+ * @param [out] service_id
+ *   A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ *   - -ESRCH: the adapter does not require a service to operate
+ */
+int
+rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter,
+					uint32_t *service_id);
+
+/**
+ * Enqueue ptrs into the event vector adapter.
+ *
+ * This function enqueues a specified number of ptrs into the event vector adapter.
+ * The ptrs are combined into a single vector event, i.e., rte_event_vector, which
+ * is then enqueued to the event queue configured in the adapter.
+ *
+ * @param adapter
+ *   Pointer to the event vector adapter.
+ * @param ptrs
+ *   Array of ptrs to be enqueued.
+ * @param num_elem
+ *   Number of ptrs to be enqueued.
+ * @param flags
+ *   Flags to be used for the enqueue operation.
+ * @return
+ *   Number of ptrs enqueued on success.
+ */
+static inline int
+rte_event_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uintptr_t ptrs[],
+				 uint16_t num_elem, uint64_t flags)
+{
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (adapter == NULL) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+
+	if (adapter->used == false) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	return adapter->enqueue(adapter, ptrs, num_elem, flags);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EVENT_VECTOR_ADAPTER_H__ */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 43cd95d765..0be1c0ba31 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -244,6 +244,27 @@ rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *cap
 	return 0;
 }
 
+int
+rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
+{
+	const struct event_vector_adapter_ops *ops;
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+
+	if (dev->dev_ops->vector_adapter_caps_get == NULL)
+		*caps = 0;
+
+	return dev->dev_ops->vector_adapter_caps_get ?
+		       dev->dev_ops->vector_adapter_caps_get(dev, caps, &ops) :
+		       0;
+}
+
 static inline int
 event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 6400d6109f..41c459feff 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1985,6 +1985,14 @@ int
 rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 				uint32_t *caps);
 
+/* Vector adapter capability bitmap flags */
+#define RTE_EVENT_VECTOR_ADAPTER_CAP_INTERNAL_PORT	0x1
+/**< This flag is set when the vector adapter is capable of generating events
+ * using an internal event port.
+ */
+
+int rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps);
+
 /**
  * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
  *
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 44687255cb..083e6c0d18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -156,6 +156,19 @@ EXPERIMENTAL {
 
 	# added in 25.03
 	rte_event_eth_rx_adapter_queues_add;
+
+	# added in 25.05
+	rte_event_vector_adapter_create;
+	rte_event_vector_adapter_create_ext;
+	rte_event_vector_adapter_lookup;
+	rte_event_vector_adapter_destroy;
+	rte_event_vector_adapter_info_get;
+	rte_event_vector_adapter_conf_get;
+	rte_event_vector_adapter_remaining;
+	rte_event_vector_adapter_stats_get;
+	rte_event_vector_adapter_stats_reset;
+	rte_event_vector_adapter_service_id_get;
+	rte_event_vector_adapter_enqueue;
 };
 
 INTERNAL {
-- 
2.43.0
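For reference, the intended application-side flow for the API introduced above can be sketched as follows. This is an illustrative, untested snippet against the proposed API: the device id, queue id, `vec_mp` mempool, and `ptrs`/`nb_ptrs` variables are hypothetical placeholders, not part of the patch.

```c
/* Sketch only: assumes an already configured event device 0 with queue 0,
 * and a vector mempool created with rte_event_vector_pool_create(). */
struct rte_event_vector_adapter_conf conf = {
	.event_dev_id = 0,
	.socket_id = rte_socket_id(),
	.ev = {
		.queue_id = 0,
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
		.event_type = RTE_EVENT_TYPE_CPU,
	},
	.vector_sz = 64,
	.vector_timeout_ns = 100000, /* flush partial vectors after 100 us */
	.vector_mp = vec_mp,
};

struct rte_event_vector_adapter *adapter =
	rte_event_vector_adapter_create(&conf);
if (adapter == NULL)
	rte_panic("vector adapter create failed: %d\n", rte_errno);

/* Aggregate a burst of pointers; flags (e.g. RTE_EVENT_VECTOR_ENQ_SOV/EOV)
 * control aggregation boundaries when the adapter advertises
 * RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV. */
int n = rte_event_vector_adapter_enqueue(adapter, ptrs, nb_ptrs, 0);
```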


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [RFC 2/2] eventdev: add default software vector adapter
  2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
  2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
@ 2025-03-26 13:14 ` pbhagavatula
  2025-03-26 14:18   ` Stephen Hemminger
  2025-03-26 14:22   ` Stephen Hemminger
  2025-03-26 17:06 ` [RFC 0/2] introduce event " Pavan Nikhilesh Bhagavatula
  2 siblings, 2 replies; 8+ messages in thread
From: pbhagavatula @ 2025-03-26 13:14 UTC (permalink / raw)
  To: jerinj; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

When the event device PMD doesn't support the vector adapter,
the library will fall back to a software implementation
which relies on a service core to check for timeouts
and vectorizes the objects on enqueue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 lib/eventdev/eventdev_pmd.h             |   2 +
 lib/eventdev/rte_event_vector_adapter.c | 318 ++++++++++++++++++++++++
 lib/eventdev/rte_eventdev.c             |   2 +
 3 files changed, 322 insertions(+)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index d03461316b..dda8ad82c9 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -87,6 +87,8 @@ extern int rte_event_logtype;
 #define RTE_EVENT_TIMER_ADAPTER_SW_CAP \
 		RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC
 
+#define RTE_EVENT_VECTOR_ADAPTER_SW_CAP RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
+
 #define RTE_EVENTDEV_DETACHED  (0)
 #define RTE_EVENTDEV_ATTACHED  (1)
 
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
index 5f38a9a40b..c1d29530be 100644
--- a/lib/eventdev/rte_event_vector_adapter.c
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -21,6 +21,10 @@
 
 #define MZ_NAME_MAX_LEN	    64
 #define DATA_MZ_NAME_FORMAT "rte_event_vector_adapter_data_%d_%d_%d"
+#define MAX_VECTOR_SIZE	    1024
+#define MIN_VECTOR_SIZE	    1
+#define MAX_VECTOR_NS	    1E9
+#define MIN_VECTOR_NS	    1E5
 
 RTE_LOG_REGISTER_SUFFIX(ev_vector_logtype, adapter.vector, NOTICE);
 #define RTE_LOGTYPE_EVVEC ev_vector_logtype
@@ -46,6 +50,9 @@ struct rte_event_vector_adapter *adapters[RTE_EVENT_MAX_DEVS][RTE_EVENT_MAX_QUEU
 		}                                                                                  \
 	} while (0)
 
+static const struct event_vector_adapter_ops sw_ops;
+static const struct rte_event_vector_adapter_info sw_info;
+
 static int
 validate_conf(const struct rte_event_vector_adapter_conf *conf,
 	      struct rte_event_vector_adapter_info *info)
@@ -229,6 +236,11 @@ rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *
 		}
 	}
 
+	if (adapter->ops == NULL) {
+		adapter->ops = &sw_ops;
+		info = sw_info;
+	}
+
 	rc = validate_conf(conf, &info);
 	if (rc < 0) {
 		adapter->ops = NULL;
@@ -338,6 +350,8 @@ rte_event_vector_adapter_lookup(uint32_t adapter_id)
 			return NULL;
 		}
 	}
+	if (adapter->ops == NULL)
+		adapter->ops = &sw_ops;
 
 	adapter->enqueue = adapter->ops->enqueue;
 	adapter->adapter_id = adapter_id;
@@ -384,6 +398,7 @@ rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_
 	if (dev->dev_ops->vector_adapter_info_get != NULL)
 		return dev->dev_ops->vector_adapter_info_get(dev, info);
 
+	*info = sw_info;
 	return 0;
 }
 
@@ -442,3 +457,306 @@ rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
 
 	return 0;
 }
+
+/* Software vector adapter implementation. */
+
+struct sw_vector_adapter_service_data;
+struct sw_vector_adapter_data {
+	uint8_t dev_id;
+	uint8_t port_id;
+	uint16_t vector_sz;
+	uint64_t timestamp;
+	uint64_t event_meta;
+	uint64_t vector_tmo_ticks;
+	uint64_t fallback_event_meta;
+	struct rte_mempool *vector_mp;
+	struct rte_event_vector *vector;
+	RTE_ATOMIC(rte_mcslock_t *) lock;
+	struct rte_event_vector_adapter *adapter;
+	struct rte_event_vector_adapter_stats stats;
+	struct sw_vector_adapter_service_data *service_data;
+	RTE_TAILQ_ENTRY(sw_vector_adapter_data) next;
+};
+
+struct sw_vector_adapter_service_data {
+	uint32_t service_id;
+	RTE_ATOMIC(rte_mcslock_t *) lock;
+	RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
+};
+
+static inline struct sw_vector_adapter_data *
+sw_vector_adapter_priv(const struct rte_event_vector_adapter *adapter)
+{
+	return adapter->data->adapter_priv;
+}
+
+static int
+sw_vector_adapter_flush(struct sw_vector_adapter_data *sw)
+{
+	struct rte_event ev;
+
+	if (sw->vector == NULL)
+		return -ENOBUFS;
+
+	ev.event = sw->event_meta;
+	ev.vec = sw->vector;
+	if (rte_event_enqueue_burst(sw->dev_id, sw->port_id, &ev, 1) != 1)
+		return -ENOSPC;
+
+	sw->vector = NULL;
+	sw->timestamp = 0;
+	return 0;
+}
+
+static int
+sw_vector_adapter_service_func(void *arg)
+{
+	struct sw_vector_adapter_service_data *service_data = arg;
+	struct sw_vector_adapter_data *sw, *nextsw;
+	rte_mcslock_t me, me_adptr;
+	int ret;
+
+	rte_mcslock_lock(&service_data->lock, &me);
+	RTE_TAILQ_FOREACH_SAFE(sw, &service_data->adapter_list, next, nextsw)
+	{
+		if (!rte_mcslock_trylock(&sw->lock, &me_adptr))
+			continue;
+		if (sw->vector == NULL) {
+			TAILQ_REMOVE(&service_data->adapter_list, sw, next);
+			rte_mcslock_unlock(&sw->lock, &me_adptr);
+			continue;
+		}
+		if (rte_get_timer_cycles() - sw->timestamp < sw->vector_tmo_ticks) {
+			rte_mcslock_unlock(&sw->lock, &me_adptr);
+			continue;
+		}
+		ret = sw_vector_adapter_flush(sw);
+		if (ret) {
+			rte_mcslock_unlock(&sw->lock, &me_adptr);
+			continue;
+		}
+		sw->stats.vectors_timedout++;
+		TAILQ_REMOVE(&service_data->adapter_list, sw, next);
+		rte_mcslock_unlock(&sw->lock, &me_adptr);
+	}
+	rte_mcslock_unlock(&service_data->lock, &me);
+
+	return 0;
+}
+
+static int
+sw_vector_adapter_service_init(struct sw_vector_adapter_data *sw)
+{
+#define SW_VECTOR_ADAPTER_SERVICE_FMT "sw_vector_adapter_service"
+	struct sw_vector_adapter_service_data *service_data;
+	struct rte_service_spec service;
+	const struct rte_memzone *mz;
+	int ret;
+
+	mz = rte_memzone_lookup(SW_VECTOR_ADAPTER_SERVICE_FMT);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve(SW_VECTOR_ADAPTER_SERVICE_FMT,
+					 sizeof(struct sw_vector_adapter_service_data),
+					 sw->adapter->data->socket_id, 0);
+		if (mz == NULL) {
+			EVVEC_LOG_DBG("failed to reserve memzone for service");
+			return -ENOMEM;
+		}
+		service_data = (struct sw_vector_adapter_service_data *)mz->addr;
+
+		service.callback = sw_vector_adapter_service_func;
+		service.callback_userdata = service_data;
+		service.socket_id = sw->adapter->data->socket_id;
+
+		ret = rte_service_component_register(&service, &service_data->service_id);
+		if (ret < 0) {
+			EVVEC_LOG_ERR("failed to register service");
+			return -ENOTSUP;
+		}
+		TAILQ_INIT(&service_data->adapter_list);
+	}
+	service_data = (struct sw_vector_adapter_service_data *)mz->addr;
+
+	sw->service_data = service_data;
+	sw->adapter->data->unified_service_id = service_data->service_id;
+	return 0;
+}
+
+static int
+sw_vector_adapter_create(struct rte_event_vector_adapter *adapter)
+{
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
+#define SW_VECTOR_ADAPTER_NAME	64
+	char name[SW_VECTOR_ADAPTER_NAME];
+	struct sw_vector_adapter_data *sw;
+	struct rte_event ev;
+
+	snprintf(name, SW_VECTOR_ADAPTER_NAME, "sw_vector_%" PRIx32, adapter->data->id);
+	sw = rte_zmalloc_socket(name, sizeof(*sw), RTE_CACHE_LINE_SIZE, adapter->data->socket_id);
+	if (sw == NULL) {
+		EVVEC_LOG_ERR("failed to allocate space for private data");
+		rte_errno = ENOMEM;
+		return -1;
+	}
+
+	/* Connect storage to adapter instance */
+	adapter->data->adapter_priv = sw;
+	sw->adapter = adapter;
+	sw->dev_id = adapter->data->event_dev_id;
+	sw->port_id = adapter->data->event_port_id;
+
+	sw->vector_sz = adapter->data->conf.vector_sz;
+	sw->vector_mp = adapter->data->conf.vector_mp;
+	sw->vector_tmo_ticks = NSEC2TICK(adapter->data->conf.vector_timeout_ns, rte_get_timer_hz());
+
+	ev = adapter->data->conf.ev;
+	ev.op = RTE_EVENT_OP_NEW;
+	sw->event_meta = ev.event;
+
+	ev = adapter->data->conf.ev_fallback;
+	ev.op = RTE_EVENT_OP_NEW;
+	ev.priority = adapter->data->conf.ev.priority;
+	ev.queue_id = adapter->data->conf.ev.queue_id;
+	ev.sched_type = adapter->data->conf.ev.sched_type;
+	sw->fallback_event_meta = ev.event;
+
+	sw_vector_adapter_service_init(sw);
+
+	return 0;
+}
+
+static int
+sw_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
+{
+	struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+	rte_free(sw);
+	adapter->data->adapter_priv = NULL;
+
+	return 0;
+}
+
+static int
+sw_vector_adapter_flush_single_event(struct sw_vector_adapter_data *sw, uintptr_t ptr)
+{
+	struct rte_event ev;
+
+	ev.event = sw->fallback_event_meta;
+	ev.u64 = ptr;
+	if (rte_event_enqueue_burst(sw->dev_id, sw->port_id, &ev, 1) != 1)
+		return -ENOSPC;
+
+	return 0;
+}
+
+static int
+sw_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uintptr_t ptrs[],
+			  uint16_t num_elem, uint64_t flags)
+{
+	struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+	uint16_t cnt = num_elem, n;
+	rte_mcslock_t me, me_s;
+	int ret;
+
+	rte_mcslock_lock(&sw->lock, &me);
+	if (flags & RTE_EVENT_VECTOR_ENQ_FLUSH) {
+		if (sw_vector_adapter_flush(sw) == 0)
+			sw->stats.vectors_flushed++;
+		rte_mcslock_unlock(&sw->lock, &me);
+		return 0;
+	}
+
+	if (num_elem == 0) {
+		rte_mcslock_unlock(&sw->lock, &me);
+		return 0;
+	}
+
+	if ((flags & RTE_EVENT_VECTOR_ENQ_SOV) && sw->vector != NULL) {
+		while (sw_vector_adapter_flush(sw) != 0)
+			;
+		sw->stats.vectors_flushed++;
+	}
+
+	while (num_elem) {
+		if (sw->vector == NULL) {
+			ret = rte_mempool_get(sw->vector_mp, (void **)&sw->vector);
+			if (ret) {
+				if (sw_vector_adapter_flush_single_event(sw, *ptrs) == 0) {
+					sw->stats.alloc_failures++;
+					num_elem--;
+					ptrs++;
+					continue;
+				}
+				rte_errno = ENOSPC;
+				goto done;
+			}
+			sw->vector->nb_elem = 0;
+			sw->vector->attr_valid = 0;
+			sw->vector->elem_offset = 0;
+		}
+		n = RTE_MIN(sw->vector_sz - sw->vector->nb_elem, num_elem);
+		memcpy(&sw->vector->u64s[sw->vector->nb_elem], ptrs, n * sizeof(uintptr_t));
+		sw->vector->nb_elem += n;
+		num_elem -= n;
+		ptrs += n;
+
+		if (sw->vector_sz == sw->vector->nb_elem) {
+			ret = sw_vector_adapter_flush(sw);
+			if (ret)
+				goto done;
+			sw->stats.vectorized++;
+		}
+	}
+
+	if ((flags & RTE_EVENT_VECTOR_ENQ_EOV) && sw->vector != NULL) {
+		while (sw_vector_adapter_flush(sw) != 0)
+			;
+		sw->stats.vectors_flushed++;
+	}
+
+	if (sw->vector != NULL && sw->vector->nb_elem) {
+		sw->timestamp = rte_get_timer_cycles();
+		rte_mcslock_lock(&sw->service_data->lock, &me_s);
+		TAILQ_INSERT_TAIL(&sw->service_data->adapter_list, sw, next);
+		rte_mcslock_unlock(&sw->service_data->lock, &me_s);
+	}
+
+done:
+	rte_mcslock_unlock(&sw->lock, &me);
+	return cnt - num_elem;
+}
+
+static int
+sw_vector_adapter_stats_get(const struct rte_event_vector_adapter *adapter,
+			    struct rte_event_vector_adapter_stats *stats)
+{
+	struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+	*stats = sw->stats;
+	return 0;
+}
+
+static int
+sw_vector_adapter_stats_reset(const struct rte_event_vector_adapter *adapter)
+{
+	struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+	memset(&sw->stats, 0, sizeof(sw->stats));
+	return 0;
+}
+
+static const struct event_vector_adapter_ops sw_ops = {
+	.create = sw_vector_adapter_create,
+	.destroy = sw_vector_adapter_destroy,
+	.enqueue = sw_vector_adapter_enqueue,
+	.stats_get = sw_vector_adapter_stats_get,
+	.stats_reset = sw_vector_adapter_stats_reset,
+};
+
+static const struct rte_event_vector_adapter_info sw_info = {
+	.min_vector_sz = MIN_VECTOR_SIZE,
+	.max_vector_sz = MAX_VECTOR_SIZE,
+	.min_vector_timeout_ns = MIN_VECTOR_NS,
+	.max_vector_timeout_ns = MAX_VECTOR_NS,
+	.log2_sz = 0,
+};
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 0be1c0ba31..e452a469d3 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -258,6 +258,8 @@ rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 		return -EINVAL;
 
 	if (dev->dev_ops->vector_adapter_caps_get == NULL)
+		*caps = RTE_EVENT_VECTOR_ADAPTER_SW_CAP;
+	else
 		*caps = 0;
 
 	return dev->dev_ops->vector_adapter_caps_get ?
-- 
2.43.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC 2/2] eventdev: add default software vector adapter
  2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software " pbhagavatula
@ 2025-03-26 14:18   ` Stephen Hemminger
  2025-03-26 17:25     ` [EXTERNAL] " Pavan Nikhilesh Bhagavatula
  2025-03-26 14:22   ` Stephen Hemminger
  1 sibling, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2025-03-26 14:18 UTC (permalink / raw)
  To: pbhagavatula; +Cc: jerinj, dev

On Wed, 26 Mar 2025 18:44:36 +0530
<pbhagavatula@marvell.com> wrote:

> +struct sw_vector_adapter_service_data {
> +	uint32_t service_id;
> +	RTE_ATOMIC(rte_mcslock_t *) lock;
> +	RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> +};

Why the indirect pointer to the lock? rather than embedding it in
the structure?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC 2/2] eventdev: add default software vector adapter
  2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software " pbhagavatula
  2025-03-26 14:18   ` Stephen Hemminger
@ 2025-03-26 14:22   ` Stephen Hemminger
  1 sibling, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2025-03-26 14:22 UTC (permalink / raw)
  To: pbhagavatula; +Cc: jerinj, dev

On Wed, 26 Mar 2025 18:44:36 +0530
<pbhagavatula@marvell.com> wrote:

> +
> +struct sw_vector_adapter_service_data {
> +	uint32_t service_id;
> +	RTE_ATOMIC(rte_mcslock_t *) lock;
> +	RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> +};

Do you really need mcslock here?
mcslock is for locks where there is large amount of contention and lots of CPU's.
This doesn't seem like that.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [RFC 0/2] introduce event vector adapter
  2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
  2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
  2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software " pbhagavatula
@ 2025-03-26 17:06 ` Pavan Nikhilesh Bhagavatula
  2 siblings, 0 replies; 8+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2025-03-26 17:06 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Jerin Jacob
  Cc: dev, pravin.pathak, hemant.agrawal, sachin.saxena,
	mattias.ronnblom, Jerin Jacob, liangma, peter.mccarthy,
	harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
	Amit Prakash Shukla, s.v.naga.harish.k, anatoly.burakov

++

> -----Original Message-----
> From: pbhagavatula@marvell.com <pbhagavatula@marvell.com>
> Sent: Wednesday, March 26, 2025 6:45 PM
> To: Jerin Jacob <jerinj@marvell.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>
> Subject: [RFC 0/2] introduce event vector adapter
> 
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> The event vector adapter supports offloading the creation of event vectors
> by vectorizing objects (mbufs/ptrs/u64s).
> 
> An event vector adapter has the following working model:
> 
>          ┌──────────┐
>          │  Vector  ├─┐
>          │ adapter0 │ │
>          └──────────┘ │
>          ┌──────────┐ │   ┌──────────┐
>          │  Vector  ├─┼──►│  Event   │
>          │ adapter1 │ │   │  Queue0  │
>          └──────────┘ │   └──────────┘
>          ┌──────────┐ │
>          │  Vector  ├─┘
>          │ adapter2 │
>          └──────────┘
> 
>          ┌──────────┐
>          │  Vector  ├─┐
>          │ adapter0 │ │   ┌──────────┐
>          └──────────┘ ├──►│  Event   │
>          ┌──────────┐ │   │  Queue1  │
>          │  Vector  ├─┘   └──────────┘
>          │ adapter1 │
>          └──────────┘
> 
>  - A vector adapter can be seen as an extension to event queue. It helps in
>    aggregating objects and generating a vector event which is enqueued to the
>    event queue.
> 
>  - Multiple vector adapters can be created on an event queue, each with its
>    own unique properties such as event properties, vector size, and timeout.
>    Note: If the target event queue doesn't support
> RTE_EVENT_QUEUE_CFG_ALL_TYPES,
>          then the vector adapter should use the same schedule type as the event
>          queue.
> 
>  - Each vector adapter aggregates objects, generates a vector event and
>    enqueues it to the event queue with the event properties mentioned in
>    rte_event_vector_adapter_conf::ev.
> 
>  - After configuring the vector adapter, the Application needs to use the
>    rte_event_vector_adapter_enqueue() function to enqueue objects i.e.,
>    mbufs/ptrs/u64s to the vector adapter.
>    On reaching the configured vector size or timeout, the vector adapter
>    enqueues the event vector to the event queue.
>    Note: Application should use the event_type and sub_event_type properly
>          identifying the contents of vector event on dequeue.
> 
>  - If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
>    capability, the application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
>    to indicate the start and end of a vector event.
>    * When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
>      aggregation in progress as a vector event and start aggregating a new
>      vector event with the enqueued ptr.
>    * When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
>      currently enqueued ptr to the aggregated event and enqueue the vector
>      event to the event queue.
>    * If both flags are set, the vector adapter will flush the current
>      aggregation as a vector event and enqueue the current ptr as a single
>      event to the event queue.
> 
>  - If the vector adapter reaches the configured vector size, it will enqueue
>    the aggregated vector event to the event queue.
> 
>  - If the vector adapter reaches the configured vector timeout, it will flush
>    the current aggregation as a vector event if the minimum vector size has
>    been reached; if not, it will enqueue the objects as single events to the
>    event queue.
> 
>  - If the vector adapter is unable to aggregate the objects into a vector event,
>    it will enqueue the objects as single events to the event queue with the event
>    properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
> 
>  Before using the vector adapter, the application has to create and configure
>  an event device and based on the event device capability it might require
>  creating an additional event port.
> 
>  When the application creates the vector adapter using the
>  ``rte_event_vector_adapter_create()`` function, the event device driver
>  capabilities are checked. If an in-built port is absent, a new event port
>  is created with a default configuration.
>  For finer control over event port creation, the application should use
>  the ``rte_event_vector_adapter_create_ext()`` function.
> 
>  The application can enqueue one or more objects to the vector adapter using
>  the ``rte_event_vector_adapter_enqueue()`` function and control the
>  aggregation using the flags.
> 
>  Vector adapters report statistics via the
>  ``rte_event_vector_adapter_stats_get()`` function, and the statistics can
>  be reset using ``rte_event_vector_adapter_stats_reset()``.
> 
>  The application can destroy the vector adapter using the
>  ``rte_event_vector_adapter_destroy()`` function.
> 
> Pavan Nikhilesh (2):
>   eventdev: introduce event vector adapter
>   eventdev: add default software vector adapter
> 
>  config/rte_config.h                     |   1 +
>  lib/eventdev/event_vector_adapter_pmd.h |  87 +++
>  lib/eventdev/eventdev_pmd.h             |  38 ++
>  lib/eventdev/meson.build                |   3 +
>  lib/eventdev/rte_event_vector_adapter.c | 762 ++++++++++++++++++++++++
>  lib/eventdev/rte_event_vector_adapter.h | 469 +++++++++++++++
>  lib/eventdev/rte_eventdev.c             |  23 +
>  lib/eventdev/rte_eventdev.h             |   8 +
>  lib/eventdev/version.map                |  13 +
>  9 files changed, 1404 insertions(+)
>  create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
>  create mode 100644 lib/eventdev/rte_event_vector_adapter.c
>  create mode 100644 lib/eventdev/rte_event_vector_adapter.h
> 
> --
> 2.43.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [EXTERNAL] Re: [RFC 2/2] eventdev: add default software vector adapter
  2025-03-26 14:18   ` Stephen Hemminger
@ 2025-03-26 17:25     ` Pavan Nikhilesh Bhagavatula
  2025-03-26 20:25       ` Stephen Hemminger
  0 siblings, 1 reply; 8+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2025-03-26 17:25 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Jerin Jacob, dev

> On Wed, 26 Mar 2025 18:44:36 +0530
> <pbhagavatula@marvell.com> wrote:
> 
> > +struct sw_vector_adapter_service_data {
> > +	uint32_t service_id;
> > +	RTE_ATOMIC(rte_mcslock_t *) lock;
> > +	RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> > +};
> 
> Why the indirect pointer to the lock? rather than embedding it in
> the structure?

IIUC, the lock itself is declared and used as a pointer right?
I looked at examples from test_mcslock.c, and this seemed correct.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [EXTERNAL] Re: [RFC 2/2] eventdev: add default software vector adapter
  2025-03-26 17:25     ` [EXTERNAL] " Pavan Nikhilesh Bhagavatula
@ 2025-03-26 20:25       ` Stephen Hemminger
  0 siblings, 0 replies; 8+ messages in thread
From: Stephen Hemminger @ 2025-03-26 20:25 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula; +Cc: Jerin Jacob, dev

On Wed, 26 Mar 2025 17:25:32 +0000
Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com> wrote:

> > On Wed, 26 Mar 2025 18:44:36 +0530
> > <pbhagavatula@marvell.com> wrote:
> >   
> > > +struct sw_vector_adapter_service_data {
> > > +	uint32_t service_id;
> > > +	RTE_ATOMIC(rte_mcslock_t *) lock;
> > > +	RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> > > +};  
> > 
> > Why the indirect pointer to the lock? rather than embedding it in
> > the structure?  
> 
> IIUC, the lock itself is declared and used as a pointer right?
> I looked at examples from test_mcslock.c, and this seemed correct.
> 

Forgot, these locks use a linked list of waiters, and the root is a pointer.

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2025-03-26 20:25 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software " pbhagavatula
2025-03-26 14:18   ` Stephen Hemminger
2025-03-26 17:25     ` [EXTERNAL] " Pavan Nikhilesh Bhagavatula
2025-03-26 20:25       ` Stephen Hemminger
2025-03-26 14:22   ` Stephen Hemminger
2025-03-26 17:06 ` [RFC 0/2] introduce event " Pavan Nikhilesh Bhagavatula
