* [RFC 0/2] introduce event vector adapter
@ 2025-03-26 13:14 pbhagavatula
2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: pbhagavatula @ 2025-03-26 13:14 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The event vector adapter supports offloading the creation of event vectors
by vectorizing objects (mbufs/ptrs/u64s).
An event vector adapter has the following working model:
┌──────────┐
│ Vector ├─┐
│ adapter0 │ │
└──────────┘ │
┌──────────┐ │ ┌──────────┐
│ Vector ├─┼──►│ Event │
│ adapter1 │ │ │ Queue0 │
└──────────┘ │ └──────────┘
┌──────────┐ │
│ Vector ├─┘
│ adapter2 │
└──────────┘
┌──────────┐
│ Vector ├─┐
│ adapter0 │ │ ┌──────────┐
└──────────┘ ├──►│ Event │
┌──────────┐ │ │ Queue1 │
│ Vector ├─┘ └──────────┘
│ adapter1 │
└──────────┘
- A vector adapter can be seen as an extension to an event queue. It helps in
aggregating objects and generating a vector event, which is enqueued to the
event queue.
- Multiple vector adapters can be created on an event queue, each with its
own unique properties such as event properties, vector size, and timeout.
Note: If the target event queue doesn't support RTE_EVENT_QUEUE_CFG_ALL_TYPES,
then the vector adapter should use the same schedule type as the event
queue.
- Each vector adapter aggregates objects, generates a vector event and
enqueues it to the event queue with the event properties mentioned in
rte_event_vector_adapter_conf::ev.
- After configuring the vector adapter, the application needs to use the
rte_event_vector_adapter_enqueue() function to enqueue objects (i.e.,
mbufs/ptrs/u64s) to the vector adapter.
On reaching the configured vector size or timeout, the vector adapter
enqueues the event vector to the event queue.
Note: The application should use the event_type and sub_event_type to properly
identify the contents of the vector event on dequeue.
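The dequeue-side dispatch described in the note above can be sketched as a small
self-contained C model; the event-type tags below are stand-ins for this sketch
only, not the real RTE_EVENT_TYPE_* values:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical event-type tags used only for this sketch; a real
 * application would compare against the RTE_EVENT_TYPE_* values it
 * configured in rte_event_vector_adapter_conf::ev and ::ev_fallback. */
enum model_ev_type { MODEL_EV_SINGLE = 1, MODEL_EV_VECTOR = 2 };

struct model_event {
	uint8_t event_type;     /* set from conf::ev or conf::ev_fallback */
	uint8_t sub_event_type; /* application-defined discriminator */
};

/* Return how many objects the dequeued event carries: the full vector
 * length for a vector event, one for a fallback single event. */
static unsigned int
model_handle_dequeue(const struct model_event *ev, unsigned int vec_len)
{
	switch (ev->event_type) {
	case MODEL_EV_VECTOR:
		return vec_len; /* vector event produced by the adapter */
	case MODEL_EV_SINGLE:
		return 1;       /* single event (aggregation fallback) */
	default:
		return 0;       /* unknown producer */
	}
}
```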
- If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
capability, the application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
to indicate the start and end of a vector event.
* When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
aggregation in progress as a vector event and start aggregating a new
vector event with the enqueued ptr.
* When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
current ptr enqueued to the aggregated event and enqueue the vector event
to the event queue.
* If both flags are set, the vector adapter will flush the current aggregation
as a vector event and enqueue the current ptr as a single event to the event
queue.
- If the vector adapter reaches the configured vector size, it will enqueue
the aggregated vector event to the event queue.
- If the vector adapter reaches the configured vector timeout, it will flush
the current aggregation as a vector event if the minimum vector size is
reached; if not, it will enqueue the objects as single events to the event
queue.
- If the vector adapter is unable to aggregate the objects into a vector event,
it will enqueue the objects as single events to the event queue with the event
properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
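The flush rules above (SOV/EOV flags, configured size, timeout with a minimum
size, single-event fallback) can be modeled in a few lines of plain C. This is
a behavioral sketch, not the adapter implementation; in particular, treating an
EOV on a lone object as a single event is an assumption taken from the
both-flags-set rule:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags from this series. */
#define MODEL_ENQ_SOV (1ULL << 0)
#define MODEL_ENQ_EOV (1ULL << 1)

struct model_adapter {
	uint16_t vector_sz;   /* configured maximum vector size */
	uint16_t agg;         /* objects in the aggregation in progress */
	unsigned int vectors; /* vector events emitted */
	unsigned int singles; /* single (fallback) events emitted */
};

/* Model one object enqueued with SOV/EOV flags. */
static void
model_enqueue(struct model_adapter *a, uint64_t flags)
{
	if ((flags & MODEL_ENQ_SOV) && a->agg > 0) {
		a->vectors++; /* flush the aggregation in progress */
		a->agg = 0;
	}
	a->agg++; /* aggregate the enqueued object */
	if (flags & MODEL_ENQ_EOV) {
		if (a->agg == 1)
			a->singles++; /* assumption: lone object goes out single */
		else
			a->vectors++; /* close and emit the current vector */
		a->agg = 0;
	} else if (a->agg == a->vector_sz) {
		a->vectors++; /* configured vector size reached */
		a->agg = 0;
	}
}

/* Model a timeout expiry: flush as a vector only if the minimum vector
 * size was reached, otherwise emit the objects as single events. */
static void
model_timeout(struct model_adapter *a, uint16_t min_sz)
{
	if (a->agg == 0)
		return;
	if (a->agg >= min_sz)
		a->vectors++;
	else
		a->singles += a->agg;
	a->agg = 0;
}
```

For example, with vector_sz = 4, four plain enqueues emit one vector event,
and a subsequent SOV enqueue first flushes any partial aggregation before
starting a new one.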
Before using the vector adapter, the application has to create and configure
an event device; depending on the event device's capabilities, it might also
require an additional event port.
When the application creates the vector adapter using the
``rte_event_vector_adapter_create()`` function, the event device driver
capabilities are checked. If an in-built port is absent, a default callback
is used to create a new event port.
For finer control over event port creation, the application should use
the ``rte_event_vector_adapter_create_ext()`` function.
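With ``rte_event_vector_adapter_create_ext()`` the application supplies its own
port-configuration callback. Below is a minimal sketch with the same shape as
the ``rte_event_vector_adapter_port_conf_cb_t`` callback in this series; the
body is illustrative only, and the pre-provisioned port id is an assumption:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch of a port-configuration callback. A real callback would stop the
 * device if needed, grow nb_event_ports, call rte_event_port_setup(), and
 * restart the device; here we just hand back an assumed, already
 * provisioned port id. */
static int
app_vector_port_conf_cb(uint8_t event_dev_id, uint8_t *event_port_id,
			void *conf_arg)
{
	(void)event_dev_id; /* unused in this sketch */
	(void)conf_arg;     /* optional port config, unused in this sketch */

	if (event_port_id == NULL)
		return -1;

	*event_port_id = 0; /* assumption: port 0 was set up by the app */
	return 0;
}
```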
The application can enqueue one or more objects to the vector adapter using the
``rte_event_vector_adapter_enqueue()`` function and control the aggregation
using the flags.
Vector adapters report stats using the ``rte_event_vector_adapter_stats_get()``
function and reset them using the ``rte_event_vector_adapter_stats_reset()``
function.
The application can destroy the vector adapter using the
``rte_event_vector_adapter_destroy()`` function.
Pavan Nikhilesh (2):
eventdev: introduce event vector adapter
eventdev: add default software vector adapter
config/rte_config.h | 1 +
lib/eventdev/event_vector_adapter_pmd.h | 87 +++
lib/eventdev/eventdev_pmd.h | 38 ++
lib/eventdev/meson.build | 3 +
lib/eventdev/rte_event_vector_adapter.c | 762 ++++++++++++++++++++++++
lib/eventdev/rte_event_vector_adapter.h | 469 +++++++++++++++
lib/eventdev/rte_eventdev.c | 23 +
lib/eventdev/rte_eventdev.h | 8 +
lib/eventdev/version.map | 13 +
9 files changed, 1404 insertions(+)
create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
create mode 100644 lib/eventdev/rte_event_vector_adapter.c
create mode 100644 lib/eventdev/rte_event_vector_adapter.h
--
2.43.0
* [RFC 1/2] eventdev: introduce event vector adapter
From: pbhagavatula @ 2025-03-26 13:14 UTC (permalink / raw)
To: jerinj, Bruce Richardson; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The event vector adapter supports offloading creation of
event vectors by vectorizing objects (mbufs/ptrs/u64s).
Applications can create a vector adapter associated with
an event queue and enqueue objects to be vectorized.
When the vector reaches the configured size or when the timeout
is reached, the vector adapter will enqueue the vector to the
event queue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/rte_config.h | 1 +
lib/eventdev/event_vector_adapter_pmd.h | 87 +++++
lib/eventdev/eventdev_pmd.h | 36 ++
lib/eventdev/meson.build | 3 +
lib/eventdev/rte_event_vector_adapter.c | 444 ++++++++++++++++++++++
lib/eventdev/rte_event_vector_adapter.h | 469 ++++++++++++++++++++++++
lib/eventdev/rte_eventdev.c | 21 ++
lib/eventdev/rte_eventdev.h | 8 +
lib/eventdev/version.map | 13 +
9 files changed, 1082 insertions(+)
create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
create mode 100644 lib/eventdev/rte_event_vector_adapter.c
create mode 100644 lib/eventdev/rte_event_vector_adapter.h
diff --git a/config/rte_config.h b/config/rte_config.h
index 86897de75e..9535c48d81 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -92,6 +92,7 @@
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
#define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE 32
/* rawdev defines */
#define RTE_RAWDEV_MAX_DEVS 64
diff --git a/lib/eventdev/event_vector_adapter_pmd.h b/lib/eventdev/event_vector_adapter_pmd.h
new file mode 100644
index 0000000000..dab0350564
--- /dev/null
+++ b/lib/eventdev/event_vector_adapter_pmd.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+#ifndef __EVENT_VECTOR_ADAPTER_PMD_H__
+#define __EVENT_VECTOR_ADAPTER_PMD_H__
+/**
+ * @file
+ * RTE Event Vector Adapter API (PMD Side)
+ *
+ * @note
+ * This file provides implementation helpers for internal use by PMDs. They
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+#include "eventdev_pmd.h"
+#include "rte_event_vector_adapter.h"
+
+typedef int (*rte_event_vector_adapter_create_t)(struct rte_event_vector_adapter *adapter);
+/**< @internal Event vector adapter implementation setup */
+typedef int (*rte_event_vector_adapter_destroy_t)(struct rte_event_vector_adapter *adapter);
+/**< @internal Event vector adapter implementation teardown */
+typedef int (*rte_event_vector_adapter_caps_get_t)(struct rte_eventdev *dev);
+/**< @internal Get capabilities for event vector adapter */
+typedef int (*rte_event_vector_adapter_stats_get_t)(const struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats);
+/**< @internal Get statistics for event vector adapter */
+typedef int (*rte_event_vector_adapter_stats_reset_t)(
+ const struct rte_event_vector_adapter *adapter);
+/**< @internal Reset statistics for event vector adapter */
+
+/**
+ * @internal Structure containing the functions exported by an event vector
+ * adapter implementation.
+ */
+struct event_vector_adapter_ops {
+ rte_event_vector_adapter_create_t create;
+ /**< Set up adapter */
+ rte_event_vector_adapter_destroy_t destroy;
+ /**< Tear down adapter */
+ rte_event_vector_adapter_caps_get_t caps_get;
+ /**< Get capabilities from driver */
+ rte_event_vector_adapter_stats_get_t stats_get;
+ /**< Get adapter statistics */
+ rte_event_vector_adapter_stats_reset_t stats_reset;
+ /**< Reset adapter statistics */
+
+ rte_event_vector_adapter_enqueue_t enqueue;
+ /**< Enqueue ptrs into the event vector adapter */
+};
+/**
+ * @internal Adapter data; structure to be placed in shared memory to be
+ * accessible by various processes in a multi-process configuration.
+ */
+struct __rte_cache_aligned rte_event_vector_adapter_data {
+ uint32_t id;
+ /**< Event vector adapter ID */
+ uint8_t event_dev_id;
+ /**< Event device ID */
+ uint32_t socket_id;
+ /**< Socket ID where memory is allocated */
+ uint8_t event_port_id;
+ /**< Optional: event port ID used when the inbuilt port is absent */
+ const struct rte_memzone *mz;
+ /**< Event vector adapter memzone pointer */
+ struct rte_event_vector_adapter_conf conf;
+ /**< Configuration used to configure the adapter. */
+ uint32_t caps;
+ /**< Adapter capabilities */
+ void *adapter_priv;
+ /**< Vector adapter private data */
+ uint32_t unified_service_id;
+ /**< Unified service ID */
+};
+
+static int
+dummy_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uintptr_t ptrs[],
+ uint16_t num_events, uint64_t flags)
+{
+ RTE_SET_USED(adapter);
+ RTE_SET_USED(ptrs);
+ RTE_SET_USED(num_events);
+ RTE_SET_USED(flags);
+ return 0;
+}
+
+#endif /* __EVENT_VECTOR_ADAPTER_PMD_H__ */
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ad13ba5b03..d03461316b 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -26,6 +26,7 @@
#include "event_timer_adapter_pmd.h"
#include "rte_event_eth_rx_adapter.h"
+#include "rte_event_vector_adapter.h"
#include "rte_eventdev.h"
#ifdef __cplusplus
@@ -1555,6 +1556,36 @@ typedef int (*eventdev_dma_adapter_stats_get)(const struct rte_eventdev *dev,
typedef int (*eventdev_dma_adapter_stats_reset)(const struct rte_eventdev *dev,
const int16_t dma_dev_id);
+/**
+ * Event device vector adapter capabilities.
+ *
+ * @param dev
+ * Event device pointer
+ * @param caps
+ * Vector adapter capabilities
+ * @param ops
+ * Vector adapter ops
+ *
+ * @return
+ * Return 0 on success.
+ *
+ */
+typedef int (*eventdev_vector_adapter_caps_get_t)(const struct rte_eventdev *dev, uint32_t *caps,
+ const struct event_vector_adapter_ops **ops);
+
+/**
+ * Event device vector adapter info.
+ *
+ * @param dev
+ * Event device pointer
+ * @param info
+ * Vector adapter info
+ *
+ * @return
+ * Return 0 on success.
+ */
+typedef int (*eventdev_vector_adapter_info_get_t)(const struct rte_eventdev *dev,
+ struct rte_event_vector_adapter_info *info);
/** Event device operations function pointer table */
struct eventdev_ops {
@@ -1697,6 +1728,11 @@ struct eventdev_ops {
eventdev_dma_adapter_stats_reset dma_adapter_stats_reset;
/**< Reset DMA stats */
+ eventdev_vector_adapter_caps_get_t vector_adapter_caps_get;
+ /**< Get vector adapter capabilities */
+ eventdev_vector_adapter_info_get_t vector_adapter_info_get;
+ /**< Get vector adapter info */
+
eventdev_selftest dev_selftest;
/**< Start eventdev Selftest */
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 71dea91727..0797c145e7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -18,6 +18,7 @@ sources = files(
'rte_event_eth_tx_adapter.c',
'rte_event_ring.c',
'rte_event_timer_adapter.c',
+ 'rte_event_vector_adapter.c',
'rte_eventdev.c',
)
headers = files(
@@ -27,6 +28,7 @@ headers = files(
'rte_event_eth_tx_adapter.h',
'rte_event_ring.h',
'rte_event_timer_adapter.h',
+ 'rte_event_vector_adapter.h',
'rte_eventdev.h',
'rte_eventdev_trace_fp.h',
)
@@ -38,6 +40,7 @@ driver_sdk_headers += files(
'eventdev_pmd_pci.h',
'eventdev_pmd_vdev.h',
'event_timer_adapter_pmd.h',
+ 'event_vector_adapter_pmd.h',
)
deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev']
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
new file mode 100644
index 0000000000..5f38a9a40b
--- /dev/null
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -0,0 +1,444 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_mcslock.h>
+#include <rte_service_component.h>
+#include <rte_tailq.h>
+
+#include "event_vector_adapter_pmd.h"
+#include "eventdev_pmd.h"
+#include "rte_event_vector_adapter.h"
+
+#define ADAPTER_ID(dev_id, queue_id, adapter_id) \
+ ((uint32_t)dev_id << 16 | (uint32_t)queue_id << 8 | (uint32_t)adapter_id)
+#define DEV_ID_FROM_ADAPTER_ID(adapter_id) ((adapter_id >> 16) & 0xFF)
+#define QUEUE_ID_FROM_ADAPTER_ID(adapter_id) ((adapter_id >> 8) & 0xFF)
+#define ADAPTER_ID_FROM_ADAPTER_ID(adapter_id) (adapter_id & 0xFF)
+
+#define MZ_NAME_MAX_LEN 64
+#define DATA_MZ_NAME_FORMAT "rte_event_vector_adapter_data_%d_%d_%d"
+
+RTE_LOG_REGISTER_SUFFIX(ev_vector_logtype, adapter.vector, NOTICE);
+#define RTE_LOGTYPE_EVVEC ev_vector_logtype
+
+struct rte_event_vector_adapter *adapters[RTE_EVENT_MAX_DEVS][RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+#define EVVEC_LOG(level, logtype, ...) \
+ RTE_LOG_LINE_PREFIX(level, logtype, \
+ "EVVEC: %s() line %u: ", __func__ RTE_LOG_COMMA __LINE__, __VA_ARGS__)
+#define EVVEC_LOG_ERR(...) EVVEC_LOG(ERR, EVVEC, __VA_ARGS__)
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define EVVEC_LOG_DBG(...) EVVEC_LOG(DEBUG, EVVEC, __VA_ARGS__)
+#else
+#define EVVEC_LOG_DBG(...) /* No debug logging */
+#endif
+
+#define PTR_VALID_OR_ERR_RET(ptr, retval) \
+ do { \
+ if (ptr == NULL) { \
+ rte_errno = EINVAL; \
+ return retval; \
+ } \
+ } while (0)
+
+static int
+validate_conf(const struct rte_event_vector_adapter_conf *conf,
+ struct rte_event_vector_adapter_info *info)
+{
+ int rc = -EINVAL;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(conf->event_dev_id, rc);
+
+ if (conf->vector_sz < info->min_vector_sz || conf->vector_sz > info->max_vector_sz) {
+ EVVEC_LOG_DBG("invalid vector size %u, should be between %u and %u",
+ conf->vector_sz, info->min_vector_sz, info->max_vector_sz);
+ return rc;
+ }
+
+ if (conf->vector_timeout_ns < info->min_vector_timeout_ns ||
+ conf->vector_timeout_ns > info->max_vector_timeout_ns) {
+ EVVEC_LOG_DBG("invalid vector timeout %" PRIu64 ", should be between %" PRIu64
+ " and %" PRIu64, conf->vector_timeout_ns, info->min_vector_timeout_ns,
+ info->max_vector_timeout_ns);
+ return rc;
+ }
+
+ if (conf->vector_mp == NULL) {
+ EVVEC_LOG_DBG("invalid mempool for vector adapter");
+ return rc;
+ }
+
+ if (info->log2_sz && rte_is_power_of_2(conf->vector_sz) == 0) {
+ EVVEC_LOG_DBG("invalid vector size %u, should be a power of 2", conf->vector_sz);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+default_port_conf_cb(uint8_t event_dev_id, uint8_t *event_port_id, void *conf_arg)
+{
+ struct rte_event_port_conf *port_conf, def_port_conf = {0};
+ struct rte_event_dev_config dev_conf;
+ struct rte_eventdev *dev;
+ uint8_t port_id;
+ uint8_t dev_id;
+ int started;
+ int ret;
+
+ dev = &rte_eventdevs[event_dev_id];
+ dev_id = dev->data->dev_id;
+ dev_conf = dev->data->dev_conf;
+
+ started = dev->data->dev_started;
+ if (started)
+ rte_event_dev_stop(dev_id);
+
+ port_id = dev_conf.nb_event_ports;
+ if (conf_arg != NULL)
+ port_conf = conf_arg;
+ else {
+ port_conf = &def_port_conf;
+ ret = rte_event_port_default_conf_get(dev_id, (port_id - 1), port_conf);
+ if (ret < 0)
+ return ret;
+ }
+
+ dev_conf.nb_event_ports += 1;
+ if (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_SINGLE_LINK)
+ dev_conf.nb_single_link_event_port_queues += 1;
+
+ ret = rte_event_dev_configure(dev_id, &dev_conf);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to configure event dev %u", dev_id);
+ if (started)
+ if (rte_event_dev_start(dev_id))
+ return -EIO;
+
+ return ret;
+ }
+
+ ret = rte_event_port_setup(dev_id, port_id, port_conf);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id);
+ return ret;
+ }
+
+ *event_port_id = port_id;
+
+ if (started)
+ ret = rte_event_dev_start(dev_id);
+
+ return ret;
+}
+
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf)
+{
+ return rte_event_vector_adapter_create_ext(conf, default_port_conf_cb, NULL);
+}
+
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
+ rte_event_vector_adapter_port_conf_cb_t conf_cb, void *conf_arg)
+{
+ struct rte_event_vector_adapter *adapter = NULL;
+ struct rte_event_vector_adapter_info info;
+ char mz_name[MZ_NAME_MAX_LEN];
+ const struct rte_memzone *mz;
+ struct rte_eventdev *dev;
+ uint32_t caps = 0;
+ int i, n, rc;
+
+ PTR_VALID_OR_ERR_RET(conf, NULL);
+
+ if (adapters[conf->event_dev_id][conf->ev.queue_id] == NULL) {
+ adapters[conf->event_dev_id][conf->ev.queue_id] =
+ rte_zmalloc("rte_event_vector_adapter",
+ sizeof(struct rte_event_vector_adapter) *
+ RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE,
+ RTE_CACHE_LINE_SIZE);
+ if (adapters[conf->event_dev_id][conf->ev.queue_id] == NULL) {
+ EVVEC_LOG_DBG("failed to allocate memory for vector adapters");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ }
+
+ for (i = 0; i < RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE; i++) {
+ if (adapters[conf->event_dev_id][conf->ev.queue_id][i].used == false) {
+ adapter = &adapters[conf->event_dev_id][conf->ev.queue_id][i];
+ adapter->adapter_id = ADAPTER_ID(conf->event_dev_id, conf->ev.queue_id, i);
+ adapter->used = true;
+ break;
+ }
+ }
+
+ if (adapter == NULL) {
+ EVVEC_LOG_DBG("no available vector adapters");
+ rte_errno = ENODEV;
+ return NULL;
+ }
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(conf->event_dev_id, NULL);
+
+ dev = &rte_eventdevs[conf->event_dev_id];
+ if (dev->dev_ops->vector_adapter_caps_get != NULL &&
+ dev->dev_ops->vector_adapter_info_get != NULL) {
+ rc = dev->dev_ops->vector_adapter_caps_get(dev, &caps, &adapter->ops);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to get vector adapter capabilities rc = %d", rc);
+ rte_errno = ENOTSUP;
+ goto error;
+ }
+
+ rc = dev->dev_ops->vector_adapter_info_get(dev, &info);
+ if (rc < 0) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to get vector adapter info rc = %d", rc);
+ rte_errno = ENOTSUP;
+ goto error;
+ }
+ }
+
+ if (conf->ev.sched_type != dev->data->queues_cfg[conf->ev.queue_id].schedule_type &&
+ !(dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)) {
+ EVVEC_LOG_DBG("invalid event schedule type, eventdev doesn't support all types");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ if (!(caps & RTE_EVENT_VECTOR_ADAPTER_CAP_INTERNAL_PORT)) {
+ if (conf_cb == NULL) {
+ EVVEC_LOG_DBG("port config callback is NULL");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ rc = conf_cb(conf->event_dev_id, &adapter->data->event_port_id, conf_arg);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to create port for vector adapter");
+ rte_errno = EINVAL;
+ goto error;
+ }
+ }
+
+ rc = validate_conf(conf, &info);
+ if (rc < 0) {
+ adapter->ops = NULL;
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ n = snprintf(mz_name, MZ_NAME_MAX_LEN, DATA_MZ_NAME_FORMAT, conf->event_dev_id,
+ conf->ev.queue_id, adapter->adapter_id);
+ if (n >= (int)sizeof(mz_name)) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to create memzone name");
+ rte_errno = EINVAL;
+ goto error;
+ }
+ mz = rte_memzone_reserve(mz_name, sizeof(struct rte_event_vector_adapter_data),
+ conf->socket_id, 0);
+ if (mz == NULL) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to reserve memzone for vector adapter");
+ rte_errno = ENOMEM;
+ goto error;
+ }
+
+ adapter->data = mz->addr;
+ memset(adapter->data, 0, sizeof(struct rte_event_vector_adapter_data));
+
+ adapter->data->mz = mz;
+ adapter->data->event_dev_id = conf->event_dev_id;
+ adapter->data->id = adapter->adapter_id;
+ adapter->data->socket_id = conf->socket_id;
+ adapter->data->conf = *conf;
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->create, NULL);
+
+ rc = adapter->ops->create(adapter);
+ if (rc < 0) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to create vector adapter");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ adapter->enqueue = adapter->ops->enqueue;
+
+ return adapter;
+
+error:
+ adapter->used = false;
+ return NULL;
+}
+
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_lookup(uint32_t adapter_id)
+{
+ uint8_t adapter_idx = ADAPTER_ID_FROM_ADAPTER_ID(adapter_id);
+ uint8_t queue_id = QUEUE_ID_FROM_ADAPTER_ID(adapter_id);
+ uint8_t dev_id = DEV_ID_FROM_ADAPTER_ID(adapter_id);
+ struct rte_event_vector_adapter *adapter;
+ const struct rte_memzone *mz;
+ char name[MZ_NAME_MAX_LEN];
+ struct rte_eventdev *dev;
+ int rc;
+
+ if (dev_id >= RTE_EVENT_MAX_DEVS || queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV ||
+ adapter_idx >= RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE) {
+ EVVEC_LOG_ERR("invalid adapter id %u", adapter_id);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ if (adapters[dev_id][queue_id] == NULL) {
+ adapters[dev_id][queue_id] =
+ rte_zmalloc("rte_event_vector_adapter",
+ sizeof(struct rte_event_vector_adapter) *
+ RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE,
+ RTE_CACHE_LINE_SIZE);
+ if (adapters[dev_id][queue_id] == NULL) {
+ EVVEC_LOG_DBG("failed to allocate memory for vector adapters");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ }
+
+ if (adapters[dev_id][queue_id][adapter_idx].used == true)
+ return &adapters[dev_id][queue_id][adapter_idx];
+
+ adapter = &adapters[dev_id][queue_id][adapter_idx];
+
+ snprintf(name, MZ_NAME_MAX_LEN, DATA_MZ_NAME_FORMAT, dev_id, queue_id, adapter_idx);
+ mz = rte_memzone_lookup(name);
+ if (mz == NULL) {
+ EVVEC_LOG_DBG("failed to lookup memzone for vector adapter");
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ adapter->data = mz->addr;
+ dev = &rte_eventdevs[dev_id];
+
+ if (dev->dev_ops->vector_adapter_caps_get != NULL) {
+ rc = dev->dev_ops->vector_adapter_caps_get(dev, &adapter->data->caps,
+ &adapter->ops);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to get vector adapter capabilities");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+ }
+
+ adapter->enqueue = adapter->ops->enqueue;
+ adapter->adapter_id = adapter_id;
+ adapter->used = true;
+
+ return adapter;
+}
+
+int
+rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
+{
+ int rc;
+
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+ if (adapter->used == false) {
+ EVVEC_LOG_ERR("event vector adapter is not allocated");
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->destroy, -ENOTSUP);
+
+ rc = adapter->ops->destroy(adapter);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to destroy vector adapter");
+ return rc;
+ }
+
+ rte_memzone_free(adapter->data->mz);
+ adapter->ops = NULL;
+ adapter->enqueue = dummy_vector_adapter_enqueue;
+ adapter->data = NULL;
+ adapter->used = false;
+
+ return 0;
+}
+
+int
+rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_adapter_info *info)
+{
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(event_dev_id, -EINVAL);
+ PTR_VALID_OR_ERR_RET(info, -EINVAL);
+
+ struct rte_eventdev *dev = &rte_eventdevs[event_dev_id];
+ if (dev->dev_ops->vector_adapter_info_get != NULL)
+ return dev->dev_ops->vector_adapter_info_get(dev, info);
+
+ return 0;
+}
+
+int
+rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_conf *conf)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+ PTR_VALID_OR_ERR_RET(conf, -EINVAL);
+
+ *conf = adapter->data->conf;
+ return 0;
+}
+
+uint8_t
+rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
+{
+ uint8_t remaining = 0;
+ int i;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(event_dev_id, 0);
+
+ if (event_queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+ return 0;
+
+ for (i = 0; i < RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE; i++) {
+ if (adapters[event_dev_id][event_queue_id][i].used == false)
+ remaining++;
+ }
+
+ return remaining;
+}
+
+int
+rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+ PTR_VALID_OR_ERR_RET(stats, -EINVAL);
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->stats_get, -ENOTSUP);
+
+ adapter->ops->stats_get(adapter, stats);
+
+ return 0;
+}
+
+int
+rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->stats_reset, -ENOTSUP);
+
+ adapter->ops->stats_reset(adapter);
+
+ return 0;
+}
diff --git a/lib/eventdev/rte_event_vector_adapter.h b/lib/eventdev/rte_event_vector_adapter.h
new file mode 100644
index 0000000000..e7ecc26cdf
--- /dev/null
+++ b/lib/eventdev/rte_event_vector_adapter.h
@@ -0,0 +1,469 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __RTE_EVENT_VECTOR_ADAPTER_H__
+#define __RTE_EVENT_VECTOR_ADAPTER_H__
+
+/**
+ * @file rte_event_vector_adapter.h
+ *
+ * @warning
+ * @b EXPERIMENTAL:
+ * All functions in this file may be changed or removed without prior notice.
+ *
+ * Event vector adapter API.
+ *
+ * An event vector adapter has the following working model:
+ *
+ * ┌──────────┐
+ * │ Vector ├─┐
+ * │ adapter0 │ │
+ * └──────────┘ │
+ * ┌──────────┐ │ ┌──────────┐
+ * │ Vector ├─┼──►│ Event │
+ * │ adapter1 │ │ │ Queue0 │
+ * └──────────┘ │ └──────────┘
+ * ┌──────────┐ │
+ * │ Vector ├─┘
+ * │ adapter2 │
+ * └──────────┘
+ *
+ * ┌──────────┐
+ * │ Vector ├─┐
+ * │ adapter0 │ │ ┌──────────┐
+ * └──────────┘ ├──►│ Event │
+ * ┌──────────┐ │ │ Queue1 │
+ * │ Vector ├─┘ └──────────┘
+ * │ adapter1 │
+ * └──────────┘
+ *
+ * - A vector adapter can be seen as an extension to an event queue. It helps in
+ * aggregating ptrs and generating a vector event, which is enqueued to the
+ * event queue.
+ *
+ * - Multiple vector adapters can be created on an event queue, each with its
+ * own unique properties such as event properties, vector size, and timeout.
+ * Note: If the target event queue doesn't support RTE_EVENT_QUEUE_CFG_ALL_TYPES,
+ * then the vector adapter should use the same schedule type as the event
+ * queue.
+ *
+ * - Each vector adapter aggregates ptrs, generates a vector event and
+ * enqueues it to the event queue with the event properties mentioned in
+ * rte_event_vector_adapter_conf::ev.
+ *
+ * - After configuring the vector adapter, the application needs to use the
+ * rte_event_vector_adapter_enqueue() function to enqueue ptrs (i.e.,
+ * mbufs/ptrs/u64s) to the vector adapter.
+ * On reaching the configured vector size or timeout, the vector adapter
+ * enqueues the event vector to the event queue.
+ * Note: The application should use the event_type and sub_event_type to properly
+ * identify the contents of the vector event on dequeue.
+ *
+ * - If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
+ * capability, the application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
+ * to indicate the start and end of a vector event.
+ * * When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
+ * aggregation in progress as a vector event and start aggregating a new
+ * vector event with the enqueued ptr.
+ * * When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
+ * current ptr enqueued to the aggregated event and enqueue the vector event
+ * to the event queue.
+ * * If both flags are set, the vector adapter will flush the current aggregation
+ * as a vector event and enqueue the current ptr as a single event to the event
+ * queue.
+ *
+ * - If the vector adapter reaches the configured vector size, it will enqueue
+ * the aggregated vector event to the event queue.
+ *
+ * - If the vector adapter reaches the configured vector timeout, it will flush
+ * the current aggregation as a vector event if the minimum vector size is
+ * reached; if not, it will enqueue the ptrs as single events to the event
+ * queue.
+ *
+ * - If the vector adapter is unable to aggregate the ptrs into a vector event,
+ * it will enqueue the ptrs as single events to the event queue with the event
+ * properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
+ *
+ * Before using the vector adapter, the application has to create and configure
+ * an event device; depending on the event device's capabilities, it might also
+ * require an additional event port.
+ *
+ * When the application creates the vector adapter using the
+ * ``rte_event_vector_adapter_create()`` function, the event device driver
+ * capabilities are checked. If an in-built port is absent, a default callback
+ * is used to create a new event port.
+ * For finer control over event port creation, the application should use
+ * the ``rte_event_vector_adapter_create_ext()`` function.
+ *
+ * The application can enqueue one or more ptrs to the vector adapter using the
+ * ``rte_event_vector_adapter_enqueue()`` function and control the aggregation
+ * using the flags.
+ *
+ * Vector adapters report stats using the ``rte_event_vector_adapter_stats_get()``
+ * function and reset them using the ``rte_event_vector_adapter_stats_reset()``
+ * function.
+ *
+ * The application can destroy the vector adapter using the
+ * ``rte_event_vector_adapter_destroy()`` function.
+ *
+ */
+
+#include <rte_eventdev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV (1ULL << 0)
+/**< Vector adapter supports Start of Vector (SOV) and End of Vector (EOV) flags
+ * in the enqueue flags.
+ *
+ * @see RTE_EVENT_VECTOR_ENQ_SOV
+ * @see RTE_EVENT_VECTOR_ENQ_EOV
+ */
+
+#define RTE_EVENT_VECTOR_ENQ_SOV (1ULL << 0)
+/**< Indicates the start of a vector event. When enqueue is called with
+ * RTE_EVENT_VECTOR_ENQ_SOV, the vector adapter will flush any vector
+ * aggregation in progress and start aggregating a new vector event with
+ * the enqueued ptr.
+ */
+#define RTE_EVENT_VECTOR_ENQ_EOV (1ULL << 1)
+/**< Indicates the end of a vector event. When enqueue is called with
+ * RTE_EVENT_VECTOR_ENQ_EOV, the vector adapter will add the current ptr
+ * to the aggregated event and flush the event vector.
+ */
+#define RTE_EVENT_VECTOR_ENQ_FLUSH (1ULL << 2)
+/**< Flush any in-progress vector aggregation. */
+
+/**
+ * Vector adapter configuration structure
+ */
+struct rte_event_vector_adapter_conf {
+ uint8_t event_dev_id;
+ /**< Event device identifier */
+ uint32_t socket_id;
+ /**< Identifier of socket from which to allocate memory for adapter */
+ struct rte_event ev;
+ /**<
+ * The values from the following event fields will be used when
+ * queuing work:
+ * - queue_id: Targeted event queue ID for vector event.
+ * - event_priority: Event priority of the vector event in
+ * the event queue relative to other events.
+ * - sched_type: Scheduling type for events from this vector adapter.
+ * - event_type: Event type for the vector event.
+ * - sub_event_type: Sub event type for the vector event.
+ * - flow_id: Flow ID for the vectors enqueued to the event queue by
+ * the vector adapter.
+ */
+ struct rte_event ev_fallback;
+ /**<
+ * The values from the following event fields will be used when
+ * aggregation fails and a single event is enqueued:
+ * - event_type: Event type for the single event.
+ * - sub_event_type: Sub event type for the single event.
+ * - flow_id: Flow ID for the single event.
+ *
+ * Other fields are taken from rte_event_vector_adapter_conf::ev.
+ */
+ uint16_t vector_sz;
+ /**<
+ * Indicates the maximum number of enqueued work items to combine into a vector event.
+ * Should be within vectorization limits of the adapter.
+ * @see rte_event_vector_adapter_info::min_vector_sz
+ * @see rte_event_vector_adapter_info::max_vector_sz
+ */
+ uint64_t vector_timeout_ns;
+ /**<
+ * Indicates the maximum number of nanoseconds to wait for receiving
+ * work. Should be within vectorization limits of the adapter.
+ * @see rte_event_vector_adapter_info::min_vector_timeout_ns
+ * @see rte_event_vector_adapter_info::max_vector_timeout_ns
+ */
+ struct rte_mempool *vector_mp;
+ /**<
+ * Indicates the mempool that should be used for allocating
+ * rte_event_vector container.
+ * @see rte_event_vector_pool_create
+ */
+};
+
+/**
+ * Vector adapter vector info structure
+ */
+struct rte_event_vector_adapter_info {
+ uint8_t max_vector_adapters_per_event_queue;
+ /**< Maximum number of vector adapters configurable per event queue */
+ uint16_t min_vector_sz;
+ /**< Minimum vector size configurable */
+ uint16_t max_vector_sz;
+ /**< Maximum vector size configurable */
+ uint64_t min_vector_timeout_ns;
+ /**< Minimum vector timeout configurable */
+ uint64_t max_vector_timeout_ns;
+ /**< Maximum vector timeout configurable */
+ uint8_t log2_sz;
+ /**< True if the configured vector size must be a power of two (expressed as log2). */
+};
+
+/**
+ * Vector adapter statistics structure
+ */
+struct rte_event_vector_adapter_stats {
+ uint64_t vectorized;
+ /**< Number of events vectorized */
+ uint64_t vectors_timedout;
+ /**< Number of vectors flushed due to timeout */
+ uint64_t vectors_flushed;
+ /**< Number of vectors flushed */
+ uint64_t alloc_failures;
+ /**< Number of vector allocation failures */
+};
+
+struct rte_event_vector_adapter;
+
+typedef int (*rte_event_vector_adapter_enqueue_t)(struct rte_event_vector_adapter *adapter,
+ uintptr_t ptrs[], uint16_t num_elem,
+ uint64_t flags);
+/**< @internal Enqueue ptrs into the event vector adapter. */
+
+struct __rte_cache_aligned rte_event_vector_adapter {
+ rte_event_vector_adapter_enqueue_t enqueue;
+ /**< Pointer to driver enqueue function. */
+ struct rte_event_vector_adapter_data *data;
+ /**< Pointer to the adapter data */
+ const struct event_vector_adapter_ops *ops;
+ /**< Functions exported by adapter driver */
+
+ uint32_t adapter_id;
+ /**< Identifier of the adapter instance. */
+ uint8_t used : 1;
+ /**< Flag to indicate that this adapter is being used. */
+};
+
+/**
+ * Callback function type for producer port creation.
+ */
+typedef int (*rte_event_vector_adapter_port_conf_cb_t)(uint8_t event_dev_id, uint8_t *event_port_id,
+ void *conf_arg);
+
+/**
+ * Create an event vector adapter.
+ *
+ * This function creates an event vector adapter based on the provided
+ * configuration. The adapter can be used to combine multiple mbufs/ptrs/u64s
+ * into a single vector event, i.e., rte_event_vector, which is then enqueued
+ * to the event queue provided.
+ * @see rte_event_vector_adapter_conf::ev::queue_id.
+ *
+ * @param conf
+ * Configuration for the event vector adapter.
+ * @return
+ * - Pointer to the created event vector adapter on success.
+ * - NULL on failure with rte_errno set to the error code.
+ * Possible rte_errno values include:
+ * - EINVAL: Invalid event device identifier specified in config.
+ * - ENOMEM: Unable to allocate sufficient memory for adapter instances.
+ * - ENOSPC: Maximum number of adapters already created.
+ */
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf);
+
+/**
+ * Create an event vector adapter with the supplied callback.
+ *
+ * This function can be used to have a more granular control over the event
+ * vector adapter creation. If a built-in port is absent, then the function uses
+ * the callback provided to create and get the port id to be used as a producer
+ * port.
+ *
+ * @param conf
+ * The event vector adapter configuration structure.
+ * @param conf_cb
+ * The port config callback function.
+ * @param conf_arg
+ * Opaque pointer to the argument for the callback function.
+ * @return
+ * - Pointer to the newly allocated event vector adapter on success.
+ * - NULL on error with rte_errno set appropriately.
+ * Possible rte_errno values include:
+ * - ERANGE: vector_timeout_ns is not in supported range.
+ * - ENOMEM: Unable to allocate sufficient memory for adapter instances.
+ * - EINVAL: Invalid event device identifier specified in config.
+ * - ENOSPC: Maximum number of adapters already created.
+ */
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
+ rte_event_vector_adapter_port_conf_cb_t conf_cb,
+ void *conf_arg);
+
+/**
+ * Lookup an event vector adapter using its identifier.
+ *
+ * This function returns the event vector adapter based on the adapter_id.
+ * This is useful when the adapter is created in another process and the
+ * application wants to use the adapter in the current process.
+ *
+ * @param adapter_id
+ * Identifier of the event vector adapter to look up.
+ * @return
+ * - Pointer to the event vector adapter on success.
+ * - NULL if the adapter is not found.
+ */
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_lookup(uint32_t adapter_id);
+
+/**
+ * Destroy an event vector adapter.
+ *
+ * This function releases the resources associated with the event vector adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter to be destroyed.
+ * @return
+ * - 0 on success.
+ * - Negative value on failure with rte_errno set to the error code.
+ */
+int
+rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter);
+
+/**
+ * Get the vector info of an event vector adapter.
+ *
+ * This function retrieves the vector info of the event vector adapter.
+ *
+ * @param event_dev_id
+ * Event device identifier.
+ * @param info
+ * Pointer to the structure where the vector info will be stored.
+ * @return
+ * 0 on success, negative value on failure.
+ * - EINVAL if the event device identifier is invalid.
+ * - ENOTSUP if the event device does not support vector adapters.
+ */
+int
+rte_event_vector_adapter_info_get(uint8_t event_dev_id,
+ struct rte_event_vector_adapter_info *info);
+
+/**
+ * Get the configuration of an event vector adapter.
+ *
+ * This function retrieves the configuration of the event vector adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter.
+ * @param conf
+ * Pointer to the structure where the configuration will be stored.
+ * @return
+ * 0 on success, negative value on failure.
+ */
+int
+rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_conf *conf);
+
+/**
+ * Get the remaining event vector adapters.
+ *
+ * This function retrieves the number of remaining event vector adapters
+ * available for a given event device and event queue.
+ *
+ * @param event_dev_id
+ * Event device identifier.
+ * @param event_queue_id
+ * Event queue identifier.
+ * @return
+ * Number of vector adapters that can still be created on the given event queue.
+ */
+uint8_t
+rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id);
+
+/**
+ * Get the event vector adapter statistics.
+ *
+ * This function retrieves the statistics of the event vector adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter.
+ * @param stats
+ * Pointer to the structure where the statistics will be stored.
+ * @return
+ * 0 on success, negative value on failure.
+ */
+int
+rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats);
+
+/**
+ * Reset the event vector adapter statistics.
+ *
+ * This function resets the statistics of the event vector adapter to their default values.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter whose statistics are to be reset.
+ * @return
+ * 0 on success, negative value on failure.
+ */
+int
+rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter);
+
+/**
+ * Retrieve the service ID of the event vector adapter. If the adapter doesn't
+ * use an rte_service function, this function returns -ESRCH.
+ *
+ * @param adapter
+ * A pointer to an event vector adapter.
+ * @param [out] service_id
+ * A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ * - 0: Success
+ * - <0: Error code on failure
+ * - -ESRCH: the adapter does not require a service to operate
+ */
+int
+rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter,
+ uint32_t *service_id);
+
+/**
+ * Enqueue ptrs into the event vector adapter.
+ *
+ * This function enqueues a specified number of ptrs into the event vector adapter.
+ * The ptrs are combined into a single vector event, i.e., rte_event_vector, which
+ * is then enqueued to the event queue configured in the adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter.
+ * @param ptrs
+ * Array of ptrs to be enqueued.
+ * @param num_elem
+ * Number of ptrs to be enqueued.
+ * @param flags
+ * Flags to be used for the enqueue operation.
+ * @return
+ * Number of ptrs enqueued on success.
+ */
+static inline int
+rte_event_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uintptr_t ptrs[],
+ uint16_t num_elem, uint64_t flags)
+{
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (adapter == NULL) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ if (adapter->used == false) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+#endif
+ return adapter->enqueue(adapter, ptrs, num_elem, flags);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EVENT_VECTOR_ADAPTER_H__ */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 43cd95d765..0be1c0ba31 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -244,6 +244,27 @@ rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *cap
return 0;
}
+int
+rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
+{
+ const struct event_vector_adapter_ops *ops;
+ struct rte_eventdev *dev;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+
+ if (caps == NULL)
+ return -EINVAL;
+
+ if (dev->dev_ops->vector_adapter_caps_get == NULL)
+ *caps = 0;
+
+ return dev->dev_ops->vector_adapter_caps_get ?
+ dev->dev_ops->vector_adapter_caps_get(dev, caps, &ops) :
+ 0;
+}
+
static inline int
event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
{
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 6400d6109f..41c459feff 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1985,6 +1985,14 @@ int
rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
uint32_t *caps);
+/* Vector adapter capability bitmap flags */
+#define RTE_EVENT_VECTOR_ADAPTER_CAP_INTERNAL_PORT 0x1
+/**< This flag is set when the vector adapter is capable of generating events
+ * using an internal event port.
+ */
+
+int rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps);
+
/**
* Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
*
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 44687255cb..083e6c0d18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -156,6 +156,19 @@ EXPERIMENTAL {
# added in 25.03
rte_event_eth_rx_adapter_queues_add;
+
+ # added in 25.05
+ rte_event_vector_adapter_create;
+ rte_event_vector_adapter_create_ext;
+ rte_event_vector_adapter_lookup;
+ rte_event_vector_adapter_destroy;
+ rte_event_vector_adapter_info_get;
+ rte_event_vector_adapter_conf_get;
+ rte_event_vector_adapter_remaining;
+ rte_event_vector_adapter_stats_get;
+ rte_event_vector_adapter_stats_reset;
+ rte_event_vector_adapter_service_id_get;
+ rte_event_vector_adapter_enqueue;
};
INTERNAL {
--
2.43.0
* [RFC 2/2] eventdev: add default software vector adapter
2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
@ 2025-03-26 13:14 ` pbhagavatula
2025-03-26 14:18 ` Stephen Hemminger
2025-03-26 14:22 ` Stephen Hemminger
2025-03-26 17:06 ` [RFC 0/2] introduce event " Pavan Nikhilesh Bhagavatula
2 siblings, 2 replies; 12+ messages in thread
From: pbhagavatula @ 2025-03-26 13:14 UTC (permalink / raw)
To: jerinj; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
When event device PMD doesn't support vector adapter,
the library will fallback to software implementation
which relies on service core to check for timeouts
and vectorizes the objects on enqueue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
lib/eventdev/eventdev_pmd.h | 2 +
lib/eventdev/rte_event_vector_adapter.c | 318 ++++++++++++++++++++++++
lib/eventdev/rte_eventdev.c | 2 +
3 files changed, 322 insertions(+)
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index d03461316b..dda8ad82c9 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -87,6 +87,8 @@ extern int rte_event_logtype;
#define RTE_EVENT_TIMER_ADAPTER_SW_CAP \
RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC
+#define RTE_EVENT_VECTOR_ADAPTER_SW_CAP RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
+
#define RTE_EVENTDEV_DETACHED (0)
#define RTE_EVENTDEV_ATTACHED (1)
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
index 5f38a9a40b..c1d29530be 100644
--- a/lib/eventdev/rte_event_vector_adapter.c
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -21,6 +21,10 @@
#define MZ_NAME_MAX_LEN 64
#define DATA_MZ_NAME_FORMAT "rte_event_vector_adapter_data_%d_%d_%d"
+#define MAX_VECTOR_SIZE 1024
+#define MIN_VECTOR_SIZE 1
+#define MAX_VECTOR_NS 1E9
+#define MIN_VECTOR_NS 1E5
RTE_LOG_REGISTER_SUFFIX(ev_vector_logtype, adapter.vector, NOTICE);
#define RTE_LOGTYPE_EVVEC ev_vector_logtype
@@ -46,6 +50,9 @@ struct rte_event_vector_adapter *adapters[RTE_EVENT_MAX_DEVS][RTE_EVENT_MAX_QUEU
} \
} while (0)
+static const struct event_vector_adapter_ops sw_ops;
+static const struct rte_event_vector_adapter_info sw_info;
+
static int
validate_conf(const struct rte_event_vector_adapter_conf *conf,
struct rte_event_vector_adapter_info *info)
@@ -229,6 +236,11 @@ rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *
}
}
+ if (adapter->ops == NULL) {
+ adapter->ops = &sw_ops;
+ info = sw_info;
+ }
+
rc = validate_conf(conf, &info);
if (rc < 0) {
adapter->ops = NULL;
@@ -338,6 +350,8 @@ rte_event_vector_adapter_lookup(uint32_t adapter_id)
return NULL;
}
}
+ if (adapter->ops == NULL)
+ adapter->ops = &sw_ops;
adapter->enqueue = adapter->ops->enqueue;
adapter->adapter_id = adapter_id;
@@ -384,6 +398,7 @@ rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_
if (dev->dev_ops->vector_adapter_info_get != NULL)
return dev->dev_ops->vector_adapter_info_get(dev, info);
+ *info = sw_info;
return 0;
}
@@ -442,3 +457,306 @@ rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
return 0;
}
+
+/* Software vector adapter implementation. */
+
+struct sw_vector_adapter_service_data;
+struct sw_vector_adapter_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ uint16_t vector_sz;
+ uint64_t timestamp;
+ uint64_t event_meta;
+ uint64_t vector_tmo_ticks;
+ uint64_t fallback_event_meta;
+ struct rte_mempool *vector_mp;
+ struct rte_event_vector *vector;
+ RTE_ATOMIC(rte_mcslock_t *) lock;
+ struct rte_event_vector_adapter *adapter;
+ struct rte_event_vector_adapter_stats stats;
+ struct sw_vector_adapter_service_data *service_data;
+ RTE_TAILQ_ENTRY(sw_vector_adapter_data) next;
+};
+
+struct sw_vector_adapter_service_data {
+ uint32_t service_id;
+ RTE_ATOMIC(rte_mcslock_t *) lock;
+ RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
+};
+
+static inline struct sw_vector_adapter_data *
+sw_vector_adapter_priv(const struct rte_event_vector_adapter *adapter)
+{
+ return adapter->data->adapter_priv;
+}
+
+static int
+sw_vector_adapter_flush(struct sw_vector_adapter_data *sw)
+{
+ struct rte_event ev;
+
+ if (sw->vector == NULL)
+ return -ENOBUFS;
+
+ ev.event = sw->event_meta;
+ ev.vec = sw->vector;
+ if (rte_event_enqueue_burst(sw->dev_id, sw->port_id, &ev, 1) != 1)
+ return -ENOSPC;
+
+ sw->vector = NULL;
+ sw->timestamp = 0;
+ return 0;
+}
+
+static int
+sw_vector_adapter_service_func(void *arg)
+{
+ struct sw_vector_adapter_service_data *service_data = arg;
+ struct sw_vector_adapter_data *sw, *nextsw;
+ rte_mcslock_t me, me_adptr;
+ int ret;
+
+ rte_mcslock_lock(&service_data->lock, &me);
+ RTE_TAILQ_FOREACH_SAFE(sw, &service_data->adapter_list, next, nextsw)
+ {
+ if (!rte_mcslock_trylock(&sw->lock, &me_adptr))
+ continue;
+ if (sw->vector == NULL) {
+ TAILQ_REMOVE(&service_data->adapter_list, sw, next);
+ rte_mcslock_unlock(&sw->lock, &me_adptr);
+ continue;
+ }
+ if (rte_get_timer_cycles() - sw->timestamp < sw->vector_tmo_ticks) {
+ rte_mcslock_unlock(&sw->lock, &me_adptr);
+ continue;
+ }
+ ret = sw_vector_adapter_flush(sw);
+ if (ret) {
+ rte_mcslock_unlock(&sw->lock, &me_adptr);
+ continue;
+ }
+ sw->stats.vectors_timedout++;
+ TAILQ_REMOVE(&service_data->adapter_list, sw, next);
+ rte_mcslock_unlock(&sw->lock, &me_adptr);
+ }
+ rte_mcslock_unlock(&service_data->lock, &me);
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_service_init(struct sw_vector_adapter_data *sw)
+{
+#define SW_VECTOR_ADAPTER_SERVICE_FMT "sw_vector_adapter_service"
+ struct sw_vector_adapter_service_data *service_data;
+ struct rte_service_spec service;
+ const struct rte_memzone *mz;
+ int ret;
+
+ mz = rte_memzone_lookup(SW_VECTOR_ADAPTER_SERVICE_FMT);
+ if (mz == NULL) {
+ mz = rte_memzone_reserve(SW_VECTOR_ADAPTER_SERVICE_FMT,
+ sizeof(struct sw_vector_adapter_service_data),
+ sw->adapter->data->socket_id, 0);
+ if (mz == NULL) {
+ EVVEC_LOG_DBG("failed to reserve memzone for service");
+ return -ENOMEM;
+ }
+ service_data = (struct sw_vector_adapter_service_data *)mz->addr;
+
+ service.callback = sw_vector_adapter_service_func;
+ service.callback_userdata = service_data;
+ service.socket_id = sw->adapter->data->socket_id;
+
+ ret = rte_service_component_register(&service, &service_data->service_id);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to register service");
+ return -ENOTSUP;
+ }
+ TAILQ_INIT(&service_data->adapter_list);
+ }
+ service_data = (struct sw_vector_adapter_service_data *)mz->addr;
+
+ sw->service_data = service_data;
+ sw->adapter->data->unified_service_id = service_data->service_id;
+ return 0;
+}
+
+static int
+sw_vector_adapter_create(struct rte_event_vector_adapter *adapter)
+{
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
+#define SW_VECTOR_ADAPTER_NAME 64
+ char name[SW_VECTOR_ADAPTER_NAME];
+ struct sw_vector_adapter_data *sw;
+ struct rte_event ev;
+
+ snprintf(name, SW_VECTOR_ADAPTER_NAME, "sw_vector_%" PRIx32, adapter->data->id);
+ sw = rte_zmalloc_socket(name, sizeof(*sw), RTE_CACHE_LINE_SIZE, adapter->data->socket_id);
+ if (sw == NULL) {
+ EVVEC_LOG_ERR("failed to allocate space for private data");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+
+ /* Connect storage to adapter instance */
+ adapter->data->adapter_priv = sw;
+ sw->adapter = adapter;
+ sw->dev_id = adapter->data->event_dev_id;
+ sw->port_id = adapter->data->event_port_id;
+
+ sw->vector_sz = adapter->data->conf.vector_sz;
+ sw->vector_mp = adapter->data->conf.vector_mp;
+ sw->vector_tmo_ticks = NSEC2TICK(adapter->data->conf.vector_timeout_ns, rte_get_timer_hz());
+
+ ev = adapter->data->conf.ev;
+ ev.op = RTE_EVENT_OP_NEW;
+ sw->event_meta = ev.event;
+
+ ev = adapter->data->conf.ev_fallback;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.priority = adapter->data->conf.ev.priority;
+ ev.queue_id = adapter->data->conf.ev.queue_id;
+ ev.sched_type = adapter->data->conf.ev.sched_type;
+ sw->fallback_event_meta = ev.event;
+
+ sw_vector_adapter_service_init(sw);
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+ rte_free(sw);
+ adapter->data->adapter_priv = NULL;
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_flush_single_event(struct sw_vector_adapter_data *sw, uintptr_t ptr)
+{
+ struct rte_event ev;
+
+ ev.event = sw->fallback_event_meta;
+ ev.u64 = ptr;
+ if (rte_event_enqueue_burst(sw->dev_id, sw->port_id, &ev, 1) != 1)
+ return -ENOSPC;
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uintptr_t ptrs[],
+ uint16_t num_elem, uint64_t flags)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+ uint16_t cnt = num_elem, n;
+ rte_mcslock_t me, me_s;
+ int ret;
+
+ rte_mcslock_lock(&sw->lock, &me);
+ if (flags & RTE_EVENT_VECTOR_ENQ_FLUSH) {
+ sw_vector_adapter_flush(sw);
+ sw->stats.vectors_flushed++;
+ rte_mcslock_unlock(&sw->lock, &me);
+ return 0;
+ }
+
+ if (num_elem == 0) {
+ rte_mcslock_unlock(&sw->lock, &me);
+ return 0;
+ }
+
+ if (flags & RTE_EVENT_VECTOR_ENQ_SOV) {
+ while (sw_vector_adapter_flush(sw) != 0)
+ ;
+ sw->stats.vectors_flushed++;
+ }
+
+ while (num_elem) {
+ if (sw->vector == NULL) {
+ ret = rte_mempool_get(sw->vector_mp, (void **)&sw->vector);
+ if (ret) {
+ if (sw_vector_adapter_flush_single_event(sw, *ptrs) == 0) {
+ sw->stats.alloc_failures++;
+ num_elem--;
+ ptrs++;
+ continue;
+ }
+ rte_errno = ENOSPC;
+ goto done;
+ }
+ sw->vector->nb_elem = 0;
+ sw->vector->attr_valid = 0;
+ sw->vector->elem_offset = 0;
+ }
+ n = RTE_MIN(sw->vector_sz - sw->vector->nb_elem, num_elem);
+ memcpy(&sw->vector->u64s[sw->vector->nb_elem], ptrs, n * sizeof(uintptr_t));
+ sw->vector->nb_elem += n;
+ num_elem -= n;
+ ptrs += n;
+
+ if (sw->vector_sz == sw->vector->nb_elem) {
+ ret = sw_vector_adapter_flush(sw);
+ if (ret)
+ goto done;
+ sw->stats.vectorized++;
+ }
+ }
+
+ if (flags & RTE_EVENT_VECTOR_ENQ_EOV) {
+ while (sw_vector_adapter_flush(sw) != 0)
+ ;
+ sw->stats.vectors_flushed++;
+ }
+
+ if (sw->vector != NULL && sw->vector->nb_elem) {
+ sw->timestamp = rte_get_timer_cycles();
+ rte_mcslock_lock(&sw->service_data->lock, &me_s);
+ TAILQ_INSERT_TAIL(&sw->service_data->adapter_list, sw, next);
+ rte_mcslock_unlock(&sw->service_data->lock, &me_s);
+ }
+
+done:
+ rte_mcslock_unlock(&sw->lock, &me);
+ return cnt - num_elem;
+}
+
+static int
+sw_vector_adapter_stats_get(const struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+ *stats = sw->stats;
+ return 0;
+}
+
+static int
+sw_vector_adapter_stats_reset(const struct rte_event_vector_adapter *adapter)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+ memset(&sw->stats, 0, sizeof(sw->stats));
+ return 0;
+}
+
+static const struct event_vector_adapter_ops sw_ops = {
+ .create = sw_vector_adapter_create,
+ .destroy = sw_vector_adapter_destroy,
+ .enqueue = sw_vector_adapter_enqueue,
+ .stats_get = sw_vector_adapter_stats_get,
+ .stats_reset = sw_vector_adapter_stats_reset,
+};
+
+static const struct rte_event_vector_adapter_info sw_info = {
+ .min_vector_sz = MIN_VECTOR_SIZE,
+ .max_vector_sz = MAX_VECTOR_SIZE,
+ .min_vector_timeout_ns = MIN_VECTOR_NS,
+ .max_vector_timeout_ns = MAX_VECTOR_NS,
+ .log2_sz = 0,
+};
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 0be1c0ba31..e452a469d3 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -258,6 +258,8 @@ rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
return -EINVAL;
if (dev->dev_ops->vector_adapter_caps_get == NULL)
+ *caps = RTE_EVENT_VECTOR_ADAPTER_SW_CAP;
+ else
*caps = 0;
return dev->dev_ops->vector_adapter_caps_get ?
--
2.43.0
* Re: [RFC 2/2] eventdev: add default software vector adapter
2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software vector adapter pbhagavatula
@ 2025-03-26 14:18 ` Stephen Hemminger
2025-03-26 17:25 ` [EXTERNAL] " Pavan Nikhilesh Bhagavatula
2025-03-26 14:22 ` Stephen Hemminger
1 sibling, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2025-03-26 14:18 UTC (permalink / raw)
To: pbhagavatula; +Cc: jerinj, dev
On Wed, 26 Mar 2025 18:44:36 +0530
<pbhagavatula@marvell.com> wrote:
> +struct sw_vector_adapter_service_data {
> + uint32_t service_id;
> + RTE_ATOMIC(rte_mcslock_t *) lock;
> + RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> +};
Why the indirect pointer to the lock? rather than embedding it in
the structure?
* Re: [RFC 2/2] eventdev: add default software vector adapter
2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software vector adapter pbhagavatula
2025-03-26 14:18 ` Stephen Hemminger
@ 2025-03-26 14:22 ` Stephen Hemminger
1 sibling, 0 replies; 12+ messages in thread
From: Stephen Hemminger @ 2025-03-26 14:22 UTC (permalink / raw)
To: pbhagavatula; +Cc: jerinj, dev
On Wed, 26 Mar 2025 18:44:36 +0530
<pbhagavatula@marvell.com> wrote:
> +
> +struct sw_vector_adapter_service_data {
> + uint32_t service_id;
> + RTE_ATOMIC(rte_mcslock_t *) lock;
> + RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> +};
Do you really need mcslock here?
mcslock is for locks where there is large amount of contention and lots of CPU's.
This doesn't seem like that.
* RE: [RFC 0/2] introduce event vector adapter
2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software vector adapter pbhagavatula
@ 2025-03-26 17:06 ` Pavan Nikhilesh Bhagavatula
2 siblings, 0 replies; 12+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2025-03-26 17:06 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob
Cc: dev, pravin.pathak, hemant.agrawal, sachin.saxena,
mattias.ronnblom, Jerin Jacob, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
Amit Prakash Shukla, s.v.naga.harish.k, anatoly.burakov
++
> -----Original Message-----
> From: pbhagavatula@marvell.com <pbhagavatula@marvell.com>
> Sent: Wednesday, March 26, 2025 6:45 PM
> To: Jerin Jacob <jerinj@marvell.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>
> Subject: [RFC 0/2] introduce event vector adapter
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> The event vector adapter supports offloading the creation of event vectors
> by vectorizing objects (mbufs/ptrs/u64s).
>
> An event vector adapter has the following working model:
>
> ┌──────────┐
> │ Vector ├─┐
> │ adapter0 │ │
> └──────────┘ │
> ┌──────────┐ │ ┌──────────┐
> │ Vector ├─┼──►│ Event │
> │ adapter1 │ │ │ Queue0 │
> └──────────┘ │ └──────────┘
> ┌──────────┐ │
> │ Vector ├─┘
> │ adapter2 │
> └──────────┘
>
> ┌──────────┐
> │ Vector ├─┐
> │ adapter0 │ │ ┌──────────┐
> └──────────┘ ├──►│ Event │
> ┌──────────┐ │ │ Queue1 │
> │ Vector ├─┘ └──────────┘
> │ adapter1 │
> └──────────┘
>
> - A vector adapter can be seen as an extension to event queue. It helps in
> aggregating objects and generating a vector event which is enqueued to the
> event queue.
>
> - Multiple vector adapters can be created on an event queue, each with its
> own unique properties such as event properties, vector size, and timeout.
> Note: If the target event queue doesn't support
> RTE_EVENT_QUEUE_CFG_ALL_TYPES,
> then the vector adapter should use the same schedule type as the event
> queue.
>
> - Each vector adapter aggregates objects, generates a vector event and
> enqueues it to the event queue with the event properties mentioned in
> rte_event_vector_adapter_conf::ev.
>
> - After configuring the vector adapter, the Application needs to use the
> rte_event_vector_adapter_enqueue() function to enqueue objects i.e.,
> mbufs/ptrs/u64s to the vector adapter.
> On reaching the configured vector size or timeout, the vector adapter
> enqueues the event vector to the event queue.
> Note: Application should use the event_type and sub_event_type properly
> identifying the contents of vector event on dequeue.
>
> - If the vector adapter advertises the
> RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
> capability, application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
> to indicate the start and end of a vector event.
> * When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush
> any
> aggregation in progress as a vector event and start aggregating a new
> vector event with the enqueued ptr.
> * When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
> current ptr enqueued to the aggregated event and enqueue the vector event
> to the event queue.
> * If both flags are set, the vector adapter will flush the current aggregation
> as a vector event and enqueue the current ptr as a single event to the event
> queue.
>
> - If the vector adapter reaches the configured vector size, it will enqueue
> the aggregated vector event to the event queue.
>
> - If the vector adapter reaches the configured vector timeout, it will flush
> the current aggregation as a vector event if the minimum vector size is
> reached, if not it will enqueue the objects as single events to the event
> queue.
>
> - If the vector adapter is unable to aggregate the objects into a vector event,
> it will enqueue the objects as single events to the event queue with the event
> properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
>
> Before using the vector adapter, the application has to create and configure
> an event device and based on the event device capability it might require
> creating an additional event port.
>
> When the application creates the vector adapter using the
> ``rte_event_vector_adapter_create()`` function, the event device driver
> capabilities are checked. If an in-built port is absent, the application
> uses the default function to create a new event port.
> For finer control over event port creation, the application should use
> the ``rte_event_vector_adapter_create_ext()`` function.
>
> The application can enqueue one or more objects to the vector adapter using
> the
> ``rte_event_vector_adapter_enqueue()`` function and control the aggregation
> using the flags.
>
> Vector adapters report stats using the
> ``rte_event_vector_adapter_stats_get()``
> function and reset the stats using the
> ``rte_event_vector_adapter_stats_reset()``.
>
> The application can destroy the vector adapter using the
> ``rte_event_vector_adapter_destroy()`` function.
>
> Pavan Nikhilesh (2):
> eventdev: introduce event vector adapter
> eventdev: add default software vector adapter
>
> config/rte_config.h | 1 +
> lib/eventdev/event_vector_adapter_pmd.h | 87 +++
> lib/eventdev/eventdev_pmd.h | 38 ++
> lib/eventdev/meson.build | 3 +
> lib/eventdev/rte_event_vector_adapter.c | 762
> ++++++++++++++++++++++++
> lib/eventdev/rte_event_vector_adapter.h | 469 +++++++++++++++
> lib/eventdev/rte_eventdev.c | 23 +
> lib/eventdev/rte_eventdev.h | 8 +
> lib/eventdev/version.map | 13 +
> 9 files changed, 1404 insertions(+)
> create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
> create mode 100644 lib/eventdev/rte_event_vector_adapter.c
> create mode 100644 lib/eventdev/rte_event_vector_adapter.h
>
> --
> 2.43.0
* RE: [EXTERNAL] Re: [RFC 2/2] eventdev: add default software vector adapter
2025-03-26 14:18 ` Stephen Hemminger
@ 2025-03-26 17:25 ` Pavan Nikhilesh Bhagavatula
2025-03-26 20:25 ` Stephen Hemminger
0 siblings, 1 reply; 12+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2025-03-26 17:25 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Jerin Jacob, dev
> On Wed, 26 Mar 2025 18:44:36 +0530
> <pbhagavatula@marvell.com> wrote:
>
> > +struct sw_vector_adapter_service_data {
> > + uint32_t service_id;
> > + RTE_ATOMIC(rte_mcslock_t *) lock;
> > + RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> > +};
>
> Why the indirect pointer to the lock? rather than embedding it in
> the structure?
IIUC, the lock itself is declared and used as a pointer, right?
I looked at examples from test_mcslock.c, and this seemed correct.
* Re: [EXTERNAL] Re: [RFC 2/2] eventdev: add default software vector adapter
2025-03-26 17:25 ` [EXTERNAL] " Pavan Nikhilesh Bhagavatula
@ 2025-03-26 20:25 ` Stephen Hemminger
0 siblings, 0 replies; 12+ messages in thread
From: Stephen Hemminger @ 2025-03-26 20:25 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula; +Cc: Jerin Jacob, dev
On Wed, 26 Mar 2025 17:25:32 +0000
Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com> wrote:
> > On Wed, 26 Mar 2025 18:44:36 +0530
> > <pbhagavatula@marvell.com> wrote:
> >
> > > +struct sw_vector_adapter_service_data {
> > > + uint32_t service_id;
> > > + RTE_ATOMIC(rte_mcslock_t *) lock;
> > > + RTE_TAILQ_HEAD(, sw_vector_adapter_data) adapter_list;
> > > +};
> >
> > Why the indirect pointer to the lock? rather than embedding it in
> > the structure?
>
> IIUC, the lock itself is declared and used as a pointer right?
> I looked at examples from test_mcslock.c, and this seemed correct.
>
I forgot: these locks use a linked list of waiters, and the root is a pointer.
* [PATCH 0/3] introduce event vector adapter
2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
@ 2025-04-10 18:00 ` pbhagavatula
2025-04-10 18:00 ` [PATCH 1/3] eventdev: " pbhagavatula
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: pbhagavatula @ 2025-04-10 18:00 UTC (permalink / raw)
To: jerinj, pravin.pathak, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, amitprakashs,
s.v.naga.harish.k, anatoly.burakov
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The event vector adapter supports offloading the creation of event vectors
by vectorizing objects (mbufs/ptrs/u64s).
An event vector adapter has the following working model:
┌──────────┐
│ Vector ├─┐
│ adapter0 │ │
└──────────┘ │
┌──────────┐ │ ┌──────────┐
│ Vector ├─┼──►│ Event │
│ adapter1 │ │ │ Queue0 │
└──────────┘ │ └──────────┘
┌──────────┐ │
│ Vector ├─┘
│ adapter2 │
└──────────┘
┌──────────┐
│ Vector ├─┐
│ adapter0 │ │ ┌──────────┐
└──────────┘ ├──►│ Event │
┌──────────┐ │ │ Queue1 │
│ Vector ├─┘ └──────────┘
│ adapter1 │
└──────────┘
- A vector adapter can be seen as an extension of an event queue: it helps in
aggregating objects and generating a vector event which is enqueued to the
event queue.
- Multiple vector adapters can be created on an event queue, each with its
own unique properties such as event properties, vector size, and timeout.
Note: If the target event queue doesn't support RTE_EVENT_QUEUE_CFG_ALL_TYPES,
then the vector adapter should use the same schedule type as the event
queue.
- Each vector adapter aggregates objects, generates a vector event and
enqueues it to the event queue with the event properties mentioned in
rte_event_vector_adapter_conf::ev.
- After configuring the vector adapter, the application needs to use the
rte_event_vector_adapter_enqueue() function to enqueue objects, i.e.,
mbufs/ptrs/u64s, to the vector adapter.
On reaching the configured vector size or timeout, the vector adapter
enqueues the event vector to the event queue.
Note: The application should use the event_type and sub_event_type to properly
identify the contents of the vector event on dequeue.
- If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
capability, application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
to indicate the start and end of a vector event.
* When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
aggregation in progress and start aggregating a new vector event with
the enqueued objects.
* When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
enqueued objects to the in-progress aggregation and enqueue the vector event
to the event queue, even if the configured vector size or timeout is not
reached.
* If both flags are set, the vector adapter will flush any aggregation in
progress and enqueue the objects as a new vector event to the event queue.
- If the vector adapter reaches the configured vector size, it will enqueue
the aggregated vector event to the event queue.
- If the vector adapter reaches the configured vector timeout, it will flush
the aggregated objects as a vector event if the minimum vector size is
reached; if not, it will enqueue the objects as single events with the
fallback event properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
- If the vector adapter is unable to aggregate the objects into a vector event,
it will enqueue the objects as single events to the event queue with the
event properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
Before using the vector adapter, the application has to create and configure
an event device; based on the event device capability, it might require
creating an additional event port.
When the application creates the vector adapter using the
``rte_event_vector_adapter_create()`` function, the event device driver
capabilities are checked. If an in-built port is absent, a default callback
is used to create a new event port.
For finer control over event port creation, the application should use
the ``rte_event_vector_adapter_create_ext()`` function.
The application can enqueue one or more objects to the vector adapter using the
``rte_event_vector_adapter_enqueue()`` function and control the aggregation
using the flags.
Vector adapters report stats using the ``rte_event_vector_adapter_stats_get()``
function and reset the stats using the
``rte_event_vector_adapter_stats_reset()`` function.
The application can destroy the vector adapter using the
``rte_event_vector_adapter_destroy()`` function.
Pavan Nikhilesh (3):
eventdev: introduce event vector adapter
eventdev: add default software vector adapter
app/eventdev: add vector adapter performance test
app/test-eventdev/evt_common.h | 9 +-
app/test-eventdev/evt_options.c | 14 +
app/test-eventdev/evt_options.h | 1 +
app/test-eventdev/test_perf_atq.c | 61 +-
app/test-eventdev/test_perf_common.c | 281 ++++--
app/test-eventdev/test_perf_common.h | 13 +-
app/test-eventdev/test_perf_queue.c | 66 +-
app/test/meson.build | 1 +
app/test/test_event_vector_adapter.c | 682 ++++++++++++++
config/rte_config.h | 1 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/features/default.ini | 7 +
.../eventdev/event_vector_adapter.rst | 208 +++++
doc/guides/prog_guide/eventdev/eventdev.rst | 10 +-
doc/guides/prog_guide/eventdev/index.rst | 1 +
doc/guides/rel_notes/release_25_07.rst | 11 +
doc/guides/tools/testeventdev.rst | 6 +
lib/eventdev/event_vector_adapter_pmd.h | 85 ++
lib/eventdev/eventdev_pmd.h | 38 +
lib/eventdev/meson.build | 3 +
lib/eventdev/rte_event_vector_adapter.c | 864 ++++++++++++++++++
lib/eventdev/rte_event_vector_adapter.h | 481 ++++++++++
lib/eventdev/rte_eventdev.c | 24 +
lib/eventdev/rte_eventdev.h | 10 +
24 files changed, 2779 insertions(+), 99 deletions(-)
create mode 100644 app/test/test_event_vector_adapter.c
create mode 100644 doc/guides/prog_guide/eventdev/event_vector_adapter.rst
create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
create mode 100644 lib/eventdev/rte_event_vector_adapter.c
create mode 100644 lib/eventdev/rte_event_vector_adapter.h
--
2.43.0
* [PATCH 1/3] eventdev: introduce event vector adapter
2025-04-10 18:00 ` [PATCH 0/3] " pbhagavatula
@ 2025-04-10 18:00 ` pbhagavatula
2025-04-10 18:00 ` [PATCH 2/3] eventdev: add default software " pbhagavatula
2025-04-10 18:00 ` [PATCH 3/3] app/eventdev: add vector adapter performance test pbhagavatula
2 siblings, 0 replies; 12+ messages in thread
From: pbhagavatula @ 2025-04-10 18:00 UTC (permalink / raw)
To: jerinj, pravin.pathak, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, amitprakashs,
s.v.naga.harish.k, anatoly.burakov, Bruce Richardson
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
The event vector adapter supports offloading creation of
event vectors by vectorizing objects (mbufs/ptrs/u64s).
Applications can create a vector adapter associated with
an event queue and enqueue objects to be vectorized.
When the vector reaches the configured size or when the timeout
is reached, the vector adapter will enqueue the vector to the
event queue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/rte_config.h | 1 +
doc/api/doxy-api-index.md | 1 +
doc/guides/eventdevs/features/default.ini | 7 +
.../eventdev/event_vector_adapter.rst | 208 ++++++++
doc/guides/prog_guide/eventdev/eventdev.rst | 10 +-
doc/guides/prog_guide/eventdev/index.rst | 1 +
doc/guides/rel_notes/release_25_07.rst | 6 +
lib/eventdev/event_vector_adapter_pmd.h | 85 ++++
lib/eventdev/eventdev_pmd.h | 36 ++
lib/eventdev/meson.build | 3 +
lib/eventdev/rte_event_vector_adapter.c | 472 +++++++++++++++++
lib/eventdev/rte_event_vector_adapter.h | 481 ++++++++++++++++++
lib/eventdev/rte_eventdev.c | 22 +
lib/eventdev/rte_eventdev.h | 10 +
14 files changed, 1338 insertions(+), 5 deletions(-)
create mode 100644 doc/guides/prog_guide/eventdev/event_vector_adapter.rst
create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
create mode 100644 lib/eventdev/rte_event_vector_adapter.c
create mode 100644 lib/eventdev/rte_event_vector_adapter.h
diff --git a/config/rte_config.h b/config/rte_config.h
index 86897de75e..9535c48d81 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -92,6 +92,7 @@
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
#define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE 32
/* rawdev defines */
#define RTE_RAWDEV_MAX_DEVS 64
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 5c425a2cb9..a11bd59526 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -30,6 +30,7 @@ The public API headers are grouped by topics:
[event_timer_adapter](@ref rte_event_timer_adapter.h),
[event_crypto_adapter](@ref rte_event_crypto_adapter.h),
[event_dma_adapter](@ref rte_event_dma_adapter.h),
+ [event_vector_adapter](@ref rte_event_vector_adapter.h),
[rawdev](@ref rte_rawdev.h),
[metrics](@ref rte_metrics.h),
[bitrate](@ref rte_bitrate.h),
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index fa24ba38b4..9fb68f946e 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -64,3 +64,10 @@ internal_port_vchan_ev_bind =
[Timer adapter Features]
internal_port =
periodic =
+
+;
+; Features of a default Vector adapter
+;
+[Vector adapter Features]
+internal_port =
+sov_eov =
diff --git a/doc/guides/prog_guide/eventdev/event_vector_adapter.rst b/doc/guides/prog_guide/eventdev/event_vector_adapter.rst
new file mode 100644
index 0000000000..e257552d22
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev/event_vector_adapter.rst
@@ -0,0 +1,208 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2025 Marvell International Ltd.
+
+Event Vector Adapter Library
+============================
+
+The Event Vector Adapter library extends the event-driven model by introducing
+a mechanism to aggregate multiple 8B objects (e.g., mbufs, u64s) into a single
+vector event and enqueue it to an event queue. It provides an API to create,
+configure, and manage vector adapters.
+
+The Event Vector Adapter library is designed to interface with hardware or
+software implementations of vector aggregation. It queries an eventdev PMD
+to determine the appropriate implementation.
+
+Examples of using the API are presented in the `API Overview`_ and
+`Processing Vector Events`_ sections.
+
+.. _vector_event:
+
+Vector Event
+~~~~~~~~~~~~
+
+A vector event is enqueued in the event device when the vector adapter
+reaches the configured vector size or timeout. The event device uses the
+attributes configured by the application when scheduling it.
+
+Fallback Behavior
+~~~~~~~~~~~~~~~~~
+
+If the vector adapter cannot aggregate objects into a vector event, it
+enqueues the objects as single events with fallback event properties configured
+by the application.
+
+Timeout and Size
+~~~~~~~~~~~~~~~~
+
+The vector adapter aggregates objects until the configured vector size or
+timeout is reached. If the timeout is reached before the minimum vector size
+is met, the adapter enqueues the objects as single events with fallback event
+properties configured by the application.
+
+API Overview
+------------
+
+This section introduces the Event Vector Adapter API, showing how to create
+and configure a vector adapter and use it to manage vector events.
+
+From a high level, the setup steps are:
+
+* rte_event_vector_adapter_create()
+
+And to enqueue and manage vectors:
+
+* rte_event_vector_adapter_enqueue()
+* rte_event_vector_adapter_stats_get()
+
+Create and Configure a Vector Adapter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To create a vector adapter instance, initialize an ``rte_event_vector_adapter_conf``
+struct with the desired values, and pass it to ``rte_event_vector_adapter_create()``.
+
+.. code-block:: c
+
+ const struct rte_event_vector_adapter_conf adapter_config = {
+ .event_dev_id = event_dev_id,
+ .socket_id = rte_socket_id(),
+ .ev = {
+ .queue_id = event_queue_id,
+ .sched_type = RTE_SCHED_TYPE_ATOMIC,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .event_type = RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU,
+ },
+ .ev_fallback = {
+ .event_type = RTE_EVENT_TYPE_CPU,
+ },
+ .vector_sz = 64,
+ .vector_timeout_ns = 1000000, // 1ms
+ .vector_mp = vector_mempool,
+ };
+
+ struct rte_event_vector_adapter *adapter;
+ adapter = rte_event_vector_adapter_create(&adapter_config);
+
+ if (adapter == NULL) { ... }
+
+Before creating an instance of a vector adapter, the application should create
+and configure an event device along with its event ports. Based on the event
+device's capability, it might require creating an additional event port to be
+used by the vector adapter. If required, the ``rte_event_vector_adapter_create()``
+function will use a default method to configure an event port.
+
+If the application desires finer control of event port allocation and setup,
+it can use the ``rte_event_vector_adapter_create_ext()`` function. This function
+is passed a callback function that will be invoked if the adapter needs to
+create an event port, giving the application the opportunity to control how
+it is done.
+
+Retrieve Vector Adapter Contextual Information
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The vector adapter implementation may have constraints on vector size or
+timeout based on the given event device or system. The application can retrieve
+these constraints using ``rte_event_vector_adapter_info_get()``. This function
+returns an ``rte_event_vector_adapter_info`` struct, which contains the following
+members:
+
+* ``max_vector_adapters_per_event_queue`` - Maximum number of vector adapters
+ configurable per event queue.
+* ``min_vector_sz`` - Minimum vector size configurable.
+* ``max_vector_sz`` - Maximum vector size configurable.
+* ``min_vector_timeout_ns`` - Minimum vector timeout configurable.
+* ``max_vector_timeout_ns`` - Maximum vector timeout configurable.
+* ``log2_sz`` - Set if the configured vector size must be a power of 2.
+
+Enqueuing Objects to the Vector Adapter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once a vector adapter has been created, the application can enqueue objects
+to it using ``rte_event_vector_adapter_enqueue()``. The adapter will aggregate
+the objects into a vector event based on the configured size and timeout.
+
+.. code-block:: c
+
+ uint64_t objs[32];
+ uint16_t num_elem = 32;
+ uint64_t flags = 0;
+
+ int ret = rte_event_vector_adapter_enqueue(adapter, objs, num_elem, flags);
+ if (ret < 0) { ... }
+
+The application can use the ``RTE_EVENT_VECTOR_ENQ_SOV`` and ``RTE_EVENT_VECTOR_ENQ_EOV``
+flags to control the start and end of vector aggregation.
+
+The ``RTE_EVENT_VECTOR_ENQ_SOV`` flag marks the beginning of a vector and applies
+to the first pointer in the enqueue operation. Any incomplete vectors will be
+enqueued to the event device.
+
+The ``RTE_EVENT_VECTOR_ENQ_EOV`` flag marks the end of a vector and applies to
+the last pointer in the enqueue operation. The vector is enqueued to the event
+device even if the configured vector size is not reached.
+
+If both flags are set, the adapter will form a new vector event with the given
+objects and enqueue it to the event device.
+
+The ``RTE_EVENT_VECTOR_ENQ_FLUSH`` flag can be used to flush any remaining
+objects in the vector adapter. This is useful when the application needs to
+ensure that all objects are processed, even if the configured vector size or
+timeout is not reached. An enqueue call with this flag set will not handle any
+objects and will return 0.
+
+Processing Vector Events
+------------------------
+
+Once a vector event has been enqueued in the event device, the application will
+subsequently dequeue it from the event device. The application can process the
+vector event and its aggregated objects as needed:
+
+.. code-block:: c
+
+ void
+ event_processing_loop(...)
+ {
+ while (...) {
+ /* Receive events from the configured event port. */
+ rte_event_dequeue_burst(event_dev_id, event_port, &ev, 1, 0);
+ ...
+ switch(ev.event_type) {
+ ...
+ case RTE_EVENT_TYPE_VECTOR:
+ process_vector_event(ev);
+ ...
+ break;
+ }
+ }
+ }
+
+ void
+ process_vector_event(struct rte_event ev)
+ {
+ struct rte_event_vector *vector = ev.event_ptr;
+ for (uint16_t i = 0; i < vector->nb_elem; i++) {
+ uint64_t obj = vector->u64s[i];
+ /* Process each object in the vector. */
+ ...
+ }
+ }
+
+Statistics and Cleanup
+----------------------
+
+The application can retrieve statistics for the vector adapter using
+``rte_event_vector_adapter_stats_get()``:
+
+.. code-block:: c
+
+ struct rte_event_vector_adapter_stats stats;
+ rte_event_vector_adapter_stats_get(adapter, &stats);
+
+ printf("Vectors created: %" PRIu64 "\n", stats.vectorized);
+ printf("Timeouts occurred: %" PRIu64 "\n", stats.vectors_timedout);
+
+To reset the statistics, use ``rte_event_vector_adapter_stats_reset()``.
+
+To destroy the vector adapter and release its resources, use
+``rte_event_vector_adapter_destroy()``. The destroy function will
+flush any remaining events in the vector adapter before destroying it.
diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst
index 8bb72da908..5e49db8983 100644
--- a/doc/guides/prog_guide/eventdev/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev/eventdev.rst
@@ -424,8 +424,8 @@ eventdev.
.. Note::
EventDev needs to be started before starting the event producers such
- as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter and
- event_dma_adapter.
+ as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter,
+ event_dma_adapter and event_vector_adapter.
Ingress of New Events
~~~~~~~~~~~~~~~~~~~~~
@@ -561,9 +561,9 @@ using ``rte_event_dev_stop_flush_callback_register()`` function.
.. Note::
The event producers such as ``event_eth_rx_adapter``,
- ``event_timer_adapter``, ``event_crypto_adapter`` and
- ``event_dma_adapter`` need to be stopped before stopping
- the event device.
+ ``event_timer_adapter``, ``event_crypto_adapter``,
+ ``event_dma_adapter`` and ``event_vector_adapter``
+ need to be stopped before stopping the event device.
Summary
-------
diff --git a/doc/guides/prog_guide/eventdev/index.rst b/doc/guides/prog_guide/eventdev/index.rst
index 2e1940ce76..af11a57e71 100644
--- a/doc/guides/prog_guide/eventdev/index.rst
+++ b/doc/guides/prog_guide/eventdev/index.rst
@@ -14,3 +14,4 @@ Event Device Library
event_crypto_adapter
event_dma_adapter
dispatcher_lib
+ event_vector_adapter
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index 093b85d206..e6e84eeec6 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -55,6 +55,12 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added eventdev vector adapter.**
+
+ * Added the Event Vector Adapter library. This library extends the event-driven
+ model by introducing APIs that allow applications to offload the creation of
+ event vectors.
+
Removed Items
-------------
diff --git a/lib/eventdev/event_vector_adapter_pmd.h b/lib/eventdev/event_vector_adapter_pmd.h
new file mode 100644
index 0000000000..667363c496
--- /dev/null
+++ b/lib/eventdev/event_vector_adapter_pmd.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+#ifndef __EVENT_VECTOR_ADAPTER_PMD_H__
+#define __EVENT_VECTOR_ADAPTER_PMD_H__
+/**
+ * @file
+ * RTE Event Vector Adapter API (PMD Side)
+ *
+ * @note
+ * This file provides implementation helpers for internal use by PMDs. They
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+#include "eventdev_pmd.h"
+#include "rte_event_vector_adapter.h"
+
+typedef int (*rte_event_vector_adapter_create_t)(struct rte_event_vector_adapter *adapter);
+/**< @internal Event vector adapter implementation setup */
+typedef int (*rte_event_vector_adapter_destroy_t)(struct rte_event_vector_adapter *adapter);
+/**< @internal Event vector adapter implementation teardown */
+typedef int (*rte_event_vector_adapter_stats_get_t)(const struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats);
+/**< @internal Get statistics for event vector adapter */
+typedef int (*rte_event_vector_adapter_stats_reset_t)(
+ const struct rte_event_vector_adapter *adapter);
+/**< @internal Reset statistics for event vector adapter */
+
+/**
+ * @internal Structure containing the functions exported by an event vector
+ * adapter implementation.
+ */
+struct event_vector_adapter_ops {
+ rte_event_vector_adapter_create_t create;
+ /**< Set up adapter */
+ rte_event_vector_adapter_destroy_t destroy;
+ /**< Tear down adapter */
+ rte_event_vector_adapter_stats_get_t stats_get;
+ /**< Get adapter statistics */
+ rte_event_vector_adapter_stats_reset_t stats_reset;
+ /**< Reset adapter statistics */
+
+ rte_event_vector_adapter_enqueue_t enqueue;
+ /**< Enqueue objects into the event vector adapter */
+};
+/**
+ * @internal Adapter data; structure to be placed in shared memory to be
+ * accessible by various processes in a multi-process configuration.
+ */
+struct __rte_cache_aligned rte_event_vector_adapter_data {
+ uint32_t id;
+ /**< Event vector adapter ID */
+ uint8_t event_dev_id;
+ /**< Event device ID */
+ uint32_t socket_id;
+ /**< Socket ID where memory is allocated */
+ uint8_t event_port_id;
+ /**< Optional: event port ID used when the inbuilt port is absent */
+ const struct rte_memzone *mz;
+ /**< Event vector adapter memzone pointer */
+ struct rte_event_vector_adapter_conf conf;
+ /**< Configuration used to configure the adapter. */
+ uint32_t caps;
+ /**< Adapter capabilities */
+ void *adapter_priv;
+ /**< Vector adapter private data*/
+ uint8_t service_inited;
+ /**< Service initialization state */
+ uint32_t unified_service_id;
+ /**< Unified Service ID*/
+};
+
+static inline int
+dummy_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uint64_t objs[],
+ uint16_t num_events, uint64_t flags)
+{
+ RTE_SET_USED(adapter);
+ RTE_SET_USED(objs);
+ RTE_SET_USED(num_events);
+ RTE_SET_USED(flags);
+ return 0;
+}
+
+#endif /* __EVENT_VECTOR_ADAPTER_PMD_H__ */
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ad13ba5b03..d03461316b 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -26,6 +26,7 @@
#include "event_timer_adapter_pmd.h"
#include "rte_event_eth_rx_adapter.h"
+#include "rte_event_vector_adapter.h"
#include "rte_eventdev.h"
#ifdef __cplusplus
@@ -1555,6 +1556,36 @@ typedef int (*eventdev_dma_adapter_stats_get)(const struct rte_eventdev *dev,
typedef int (*eventdev_dma_adapter_stats_reset)(const struct rte_eventdev *dev,
const int16_t dma_dev_id);
+/**
+ * Event device vector adapter capabilities.
+ *
+ * @param dev
+ * Event device pointer
+ * @param caps
+ * Vector adapter capabilities
+ * @param ops
+ * Vector adapter ops
+ *
+ * @return
+ * Return 0 on success.
+ *
+ */
+typedef int (*eventdev_vector_adapter_caps_get_t)(const struct rte_eventdev *dev, uint32_t *caps,
+ const struct event_vector_adapter_ops **ops);
+
+/**
+ * Event device vector adapter info.
+ *
+ * @param dev
+ * Event device pointer
+ * @param info
+ * Vector adapter info
+ *
+ * @return
+ * Return 0 on success.
+ */
+typedef int (*eventdev_vector_adapter_info_get_t)(const struct rte_eventdev *dev,
+ struct rte_event_vector_adapter_info *info);
/** Event device operations function pointer table */
struct eventdev_ops {
@@ -1697,6 +1728,11 @@ struct eventdev_ops {
eventdev_dma_adapter_stats_reset dma_adapter_stats_reset;
/**< Reset DMA stats */
+ eventdev_vector_adapter_caps_get_t vector_adapter_caps_get;
+ /**< Get vector adapter capabilities */
+ eventdev_vector_adapter_info_get_t vector_adapter_info_get;
+ /**< Get vector adapter info */
+
eventdev_selftest dev_selftest;
/**< Start eventdev Selftest */
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 71dea91727..0797c145e7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -18,6 +18,7 @@ sources = files(
'rte_event_eth_tx_adapter.c',
'rte_event_ring.c',
'rte_event_timer_adapter.c',
+ 'rte_event_vector_adapter.c',
'rte_eventdev.c',
)
headers = files(
@@ -27,6 +28,7 @@ headers = files(
'rte_event_eth_tx_adapter.h',
'rte_event_ring.h',
'rte_event_timer_adapter.h',
+ 'rte_event_vector_adapter.h',
'rte_eventdev.h',
'rte_eventdev_trace_fp.h',
)
@@ -38,6 +40,7 @@ driver_sdk_headers += files(
'eventdev_pmd_pci.h',
'eventdev_pmd_vdev.h',
'event_timer_adapter_pmd.h',
+ 'event_vector_adapter_pmd.h',
)
deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev']
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
new file mode 100644
index 0000000000..ff6bc43b17
--- /dev/null
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -0,0 +1,472 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_mcslock.h>
+#include <rte_service_component.h>
+#include <rte_tailq.h>
+
+#include <eal_export.h>
+
+#include "event_vector_adapter_pmd.h"
+#include "eventdev_pmd.h"
+#include "rte_event_vector_adapter.h"
+
+#define ADAPTER_ID(dev_id, queue_id, adapter_id) \
+ ((uint32_t)dev_id << 16 | (uint32_t)queue_id << 8 | (uint32_t)adapter_id)
+#define DEV_ID_FROM_ADAPTER_ID(adapter_id) ((adapter_id >> 16) & 0xFF)
+#define QUEUE_ID_FROM_ADAPTER_ID(adapter_id) ((adapter_id >> 8) & 0xFF)
+#define ADAPTER_ID_FROM_ADAPTER_ID(adapter_id) (adapter_id & 0xFF)
+
+#define MZ_NAME_MAX_LEN 64
+#define DATA_MZ_NAME_FORMAT "vector_adapter_data_%d_%d_%d"
+
+RTE_LOG_REGISTER_SUFFIX(ev_vector_logtype, adapter.vector, NOTICE);
+#define RTE_LOGTYPE_EVVEC ev_vector_logtype
+
+struct rte_event_vector_adapter *adapters[RTE_EVENT_MAX_DEVS][RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+#define EVVEC_LOG(level, logtype, ...) \
+ RTE_LOG_LINE_PREFIX(level, logtype, \
+ "EVVEC: %s() line %u: ", __func__ RTE_LOG_COMMA __LINE__, __VA_ARGS__)
+#define EVVEC_LOG_ERR(...) EVVEC_LOG(ERR, EVVEC, __VA_ARGS__)
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define EVVEC_LOG_DBG(...) EVVEC_LOG(DEBUG, EVVEC, __VA_ARGS__)
+#else
+#define EVVEC_LOG_DBG(...) /* No debug logging */
+#endif
+
+#define PTR_VALID_OR_ERR_RET(ptr, retval) \
+ do { \
+ if (ptr == NULL) { \
+ rte_errno = EINVAL; \
+ return retval; \
+ } \
+ } while (0)
+
+static int
+validate_conf(const struct rte_event_vector_adapter_conf *conf,
+ struct rte_event_vector_adapter_info *info)
+{
+ int rc = -EINVAL;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(conf->event_dev_id, rc);
+
+ if (conf->vector_sz < info->min_vector_sz || conf->vector_sz > info->max_vector_sz) {
+ EVVEC_LOG_DBG("invalid vector size %u, should be between %u and %u",
+ conf->vector_sz, info->min_vector_sz, info->max_vector_sz);
+ return rc;
+ }
+
+ if (conf->vector_timeout_ns < info->min_vector_timeout_ns ||
+ conf->vector_timeout_ns > info->max_vector_timeout_ns) {
+ EVVEC_LOG_DBG("invalid vector timeout %" PRIu64 ", should be between %" PRIu64
+ " and %" PRIu64,
+ conf->vector_timeout_ns, info->min_vector_timeout_ns,
+ info->max_vector_timeout_ns);
+ return rc;
+ }
+
+ if (conf->vector_mp == NULL) {
+ EVVEC_LOG_DBG("invalid mempool for vector adapter");
+ return rc;
+ }
+
+ if (info->log2_sz && rte_is_power_of_2(conf->vector_sz) == 0) {
+ EVVEC_LOG_DBG("invalid vector size %u, should be a power of 2", conf->vector_sz);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+default_port_conf_cb(uint8_t event_dev_id, uint8_t *event_port_id, void *conf_arg)
+{
+ struct rte_event_port_conf *port_conf, def_port_conf = {0};
+ struct rte_event_dev_config dev_conf;
+ struct rte_eventdev *dev;
+ uint8_t port_id;
+ uint8_t dev_id;
+ int started;
+ int ret;
+
+ dev = &rte_eventdevs[event_dev_id];
+ dev_id = dev->data->dev_id;
+ dev_conf = dev->data->dev_conf;
+
+ started = dev->data->dev_started;
+ if (started)
+ rte_event_dev_stop(dev_id);
+
+ port_id = dev_conf.nb_event_ports;
+ if (conf_arg != NULL)
+ port_conf = conf_arg;
+ else {
+ port_conf = &def_port_conf;
+ ret = rte_event_port_default_conf_get(dev_id, (port_id - 1), port_conf);
+ if (ret < 0)
+ return ret;
+ }
+
+ dev_conf.nb_event_ports += 1;
+ if (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_SINGLE_LINK)
+ dev_conf.nb_single_link_event_port_queues += 1;
+
+ ret = rte_event_dev_configure(dev_id, &dev_conf);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to configure event dev %u", dev_id);
+ if (started)
+ if (rte_event_dev_start(dev_id))
+ return -EIO;
+
+ return ret;
+ }
+
+ ret = rte_event_port_setup(dev_id, port_id, port_conf);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id);
+ return ret;
+ }
+
+ *event_port_id = port_id;
+
+ if (started)
+ ret = rte_event_dev_start(dev_id);
+
+ return ret;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create, 25.07)
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf)
+{
+ return rte_event_vector_adapter_create_ext(conf, default_port_conf_cb, NULL);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create_ext, 25.07)
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
+ rte_event_vector_adapter_port_conf_cb_t conf_cb, void *conf_arg)
+{
+ struct rte_event_vector_adapter *adapter = NULL;
+ struct rte_event_vector_adapter_info info;
+ char mz_name[MZ_NAME_MAX_LEN];
+ const struct rte_memzone *mz;
+ struct rte_eventdev *dev;
+	uint32_t caps = 0;
+ int i, n, rc;
+
+ PTR_VALID_OR_ERR_RET(conf, NULL);
+
+ if (adapters[conf->event_dev_id][conf->ev.queue_id] == NULL) {
+ adapters[conf->event_dev_id][conf->ev.queue_id] =
+ rte_zmalloc("rte_event_vector_adapter",
+ sizeof(struct rte_event_vector_adapter) *
+ RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE,
+ RTE_CACHE_LINE_SIZE);
+ if (adapters[conf->event_dev_id][conf->ev.queue_id] == NULL) {
+ EVVEC_LOG_DBG("failed to allocate memory for vector adapters");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ }
+
+	for (i = 0; i < RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE; i++) {
+		if (adapters[conf->event_dev_id][conf->ev.queue_id][i].used == false) {
+			adapter = &adapters[conf->event_dev_id][conf->ev.queue_id][i];
+			adapter->adapter_id = ADAPTER_ID(conf->event_dev_id, conf->ev.queue_id, i);
+			adapter->used = true;
+			break;
+		}
+		EVVEC_LOG_DBG("adapter %u is already in use", i);
+	}
+
+ if (adapter == NULL) {
+ EVVEC_LOG_DBG("no available vector adapters");
+ rte_errno = ENODEV;
+ return NULL;
+ }
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(conf->event_dev_id, NULL);
+
+ dev = &rte_eventdevs[conf->event_dev_id];
+ if (dev->dev_ops->vector_adapter_caps_get != NULL &&
+ dev->dev_ops->vector_adapter_info_get != NULL) {
+ rc = dev->dev_ops->vector_adapter_caps_get(dev, &caps, &adapter->ops);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to get vector adapter capabilities rc = %d", rc);
+ rte_errno = ENOTSUP;
+ goto error;
+ }
+
+ rc = dev->dev_ops->vector_adapter_info_get(dev, &info);
+ if (rc < 0) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to get vector adapter info rc = %d", rc);
+ rte_errno = ENOTSUP;
+ goto error;
+ }
+ }
+
+ if (conf->ev.sched_type != dev->data->queues_cfg[conf->ev.queue_id].schedule_type &&
+ !(dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)) {
+ EVVEC_LOG_DBG("invalid event schedule type, eventdev doesn't support all types");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ rc = validate_conf(conf, &info);
+ if (rc < 0) {
+ adapter->ops = NULL;
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ n = snprintf(mz_name, MZ_NAME_MAX_LEN, DATA_MZ_NAME_FORMAT, conf->event_dev_id,
+ conf->ev.queue_id, adapter->adapter_id);
+ if (n >= (int)sizeof(mz_name)) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to create memzone name");
+ rte_errno = EINVAL;
+ goto error;
+ }
+ mz = rte_memzone_reserve(mz_name, sizeof(struct rte_event_vector_adapter_data),
+ conf->socket_id, 0);
+ if (mz == NULL) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to reserve memzone for vector adapter");
+ rte_errno = ENOMEM;
+ goto error;
+ }
+
+ adapter->data = mz->addr;
+ memset(adapter->data, 0, sizeof(struct rte_event_vector_adapter_data));
+
+ adapter->data->mz = mz;
+ adapter->data->event_dev_id = conf->event_dev_id;
+ adapter->data->id = adapter->adapter_id;
+ adapter->data->socket_id = conf->socket_id;
+ adapter->data->conf = *conf;
+
+ if (!(caps & RTE_EVENT_VECTOR_ADAPTER_CAP_INTERNAL_PORT)) {
+ if (conf_cb == NULL) {
+ EVVEC_LOG_DBG("port config callback is NULL");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ rc = conf_cb(conf->event_dev_id, &adapter->data->event_port_id, conf_arg);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to create port for vector adapter");
+ rte_errno = EINVAL;
+ goto error;
+ }
+ }
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->create, NULL);
+
+ rc = adapter->ops->create(adapter);
+ if (rc < 0) {
+ adapter->ops = NULL;
+ EVVEC_LOG_DBG("failed to create vector adapter");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ adapter->enqueue = adapter->ops->enqueue;
+
+ return adapter;
+
+error:
+ adapter->used = false;
+ return NULL;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_lookup, 25.07)
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_lookup(uint32_t adapter_id)
+{
+ uint8_t adapter_idx = ADAPTER_ID_FROM_ADAPTER_ID(adapter_id);
+ uint8_t queue_id = QUEUE_ID_FROM_ADAPTER_ID(adapter_id);
+ uint8_t dev_id = DEV_ID_FROM_ADAPTER_ID(adapter_id);
+ struct rte_event_vector_adapter *adapter;
+ const struct rte_memzone *mz;
+ char name[MZ_NAME_MAX_LEN];
+ struct rte_eventdev *dev;
+ int rc;
+
+ if (dev_id >= RTE_EVENT_MAX_DEVS || queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV ||
+ adapter_idx >= RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE) {
+ EVVEC_LOG_ERR("invalid adapter id %u", adapter_id);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ if (adapters[dev_id][queue_id] == NULL) {
+ adapters[dev_id][queue_id] =
+ rte_zmalloc("rte_event_vector_adapter",
+ sizeof(struct rte_event_vector_adapter) *
+ RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE,
+ RTE_CACHE_LINE_SIZE);
+ if (adapters[dev_id][queue_id] == NULL) {
+ EVVEC_LOG_DBG("failed to allocate memory for vector adapters");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ }
+
+ if (adapters[dev_id][queue_id][adapter_idx].used == true)
+ return &adapters[dev_id][queue_id][adapter_idx];
+
+ adapter = &adapters[dev_id][queue_id][adapter_idx];
+
+ snprintf(name, MZ_NAME_MAX_LEN, DATA_MZ_NAME_FORMAT, dev_id, queue_id, adapter_idx);
+ mz = rte_memzone_lookup(name);
+ if (mz == NULL) {
+ EVVEC_LOG_DBG("failed to lookup memzone for vector adapter");
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ adapter->data = mz->addr;
+ dev = &rte_eventdevs[dev_id];
+
+ if (dev->dev_ops->vector_adapter_caps_get != NULL) {
+ rc = dev->dev_ops->vector_adapter_caps_get(dev, &adapter->data->caps,
+ &adapter->ops);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to get vector adapter capabilities");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+ }
+
+ adapter->enqueue = adapter->ops->enqueue;
+ adapter->adapter_id = adapter_id;
+ adapter->used = true;
+
+ return adapter;
+}
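rte_event_vector_adapter_lookup() above decomposes the 32-bit adapter id with DEV_ID_FROM_ADAPTER_ID() and friends, whose definitions fall outside this hunk. One plausible byte-per-field packing, purely illustrative of what ADAPTER_ID() and its inverses could do:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 32-bit adapter id layout: dev_id in bits 16-23,
 * queue_id in bits 8-15, per-queue instance index in bits 0-7.
 * This is an assumption for illustration, not the patch's macros. */
static inline uint32_t
mk_adapter_id(uint8_t dev_id, uint8_t queue_id, uint8_t inst)
{
	return ((uint32_t)dev_id << 16) | ((uint32_t)queue_id << 8) | inst;
}

static inline uint8_t id_dev(uint32_t id)   { return (id >> 16) & 0xff; }
static inline uint8_t id_queue(uint32_t id) { return (id >> 8) & 0xff; }
static inline uint8_t id_inst(uint32_t id)  { return id & 0xff; }
```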
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_service_id_get, 25.07)
+int
+rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter,
+ uint32_t *service_id)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+
+ if (adapter->data->service_inited && service_id != NULL)
+ *service_id = adapter->data->unified_service_id;
+
+ return adapter->data->service_inited ? 0 : -ESRCH;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_destroy, 25.07)
+int
+rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
+{
+ int rc;
+
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+ if (adapter->used == false) {
+ EVVEC_LOG_ERR("event vector adapter is not allocated");
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->destroy, -ENOTSUP);
+
+ rc = adapter->ops->destroy(adapter);
+ if (rc < 0) {
+ EVVEC_LOG_DBG("failed to destroy vector adapter");
+ return rc;
+ }
+
+ rte_memzone_free(adapter->data->mz);
+ adapter->ops = NULL;
+ adapter->enqueue = dummy_vector_adapter_enqueue;
+ adapter->data = NULL;
+ adapter->used = false;
+
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_info_get, 25.07)
+int
+rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_adapter_info *info)
+{
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(event_dev_id, -EINVAL);
+ PTR_VALID_OR_ERR_RET(info, -EINVAL);
+
+ struct rte_eventdev *dev = &rte_eventdevs[event_dev_id];
+ if (dev->dev_ops->vector_adapter_info_get != NULL)
+ return dev->dev_ops->vector_adapter_info_get(dev, info);
+
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_conf_get, 25.07)
+int
+rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_conf *conf)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+ PTR_VALID_OR_ERR_RET(conf, -EINVAL);
+
+ *conf = adapter->data->conf;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_remaining, 25.07)
+uint8_t
+rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
+{
+ uint8_t remaining = 0;
+ int i;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(event_dev_id, 0);
+
+ if (event_queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+		return 0;
+
+	if (adapters[event_dev_id][event_queue_id] == NULL)
+		return RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE;
+
+ for (i = 0; i < RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE; i++) {
+ if (adapters[event_dev_id][event_queue_id][i].used == false)
+ remaining++;
+ }
+
+ return remaining;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_get, 25.07)
+int
+rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+ PTR_VALID_OR_ERR_RET(stats, -EINVAL);
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->stats_get, -ENOTSUP);
+
+ adapter->ops->stats_get(adapter, stats);
+
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_reset, 25.07)
+int
+rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
+{
+ PTR_VALID_OR_ERR_RET(adapter, -EINVAL);
+
+ FUNC_PTR_OR_ERR_RET(adapter->ops->stats_reset, -ENOTSUP);
+
+ adapter->ops->stats_reset(adapter);
+
+ return 0;
+}
diff --git a/lib/eventdev/rte_event_vector_adapter.h b/lib/eventdev/rte_event_vector_adapter.h
new file mode 100644
index 0000000000..61680ec307
--- /dev/null
+++ b/lib/eventdev/rte_event_vector_adapter.h
@@ -0,0 +1,481 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#ifndef __RTE_EVENT_VECTOR_ADAPTER_H__
+#define __RTE_EVENT_VECTOR_ADAPTER_H__
+
+/**
+ * @file rte_event_vector_adapter.h
+ *
+ * @warning
+ * @b EXPERIMENTAL:
+ * All functions in this file may be changed or removed without prior notice.
+ *
+ * Event vector adapter API.
+ *
+ * An event vector adapter has the following working model:
+ *
+ * ┌──────────┐
+ * │ Vector ├─┐
+ * │ adapter0 │ │
+ * └──────────┘ │
+ * ┌──────────┐ │ ┌──────────┐
+ * │ Vector ├─┼──►│ Event │
+ * │ adapter1 │ │ │ Queue0 │
+ * └──────────┘ │ └──────────┘
+ * ┌──────────┐ │
+ * │ Vector ├─┘
+ * │ adapter2 │
+ * └──────────┘
+ *
+ * ┌──────────┐
+ * │ Vector ├─┐
+ * │ adapter0 │ │ ┌──────────┐
+ * └──────────┘ ├──►│ Event │
+ * ┌──────────┐ │ │ Queue1 │
+ * │ Vector ├─┘ └──────────┘
+ * │ adapter1 │
+ * └──────────┘
+ *
+ * - A vector adapter can be seen as an extension to an event queue. It helps in
+ * aggregating objects and generating a vector event which is enqueued to the
+ * event queue.
+ *
+ * - Multiple vector adapters can be created on an event queue, each with its
+ * own unique properties such as event properties, vector size, and timeout.
+ * Note: If the target event queue doesn't support RTE_EVENT_QUEUE_CFG_ALL_TYPES,
+ * then the vector adapter should use the same schedule type as the event
+ * queue.
+ *
+ * - Each vector adapter aggregates 8B objects, generates a vector event and
+ * enqueues it to the event queue with the event properties mentioned in
+ * rte_event_vector_adapter_conf::ev.
+ *
+ * - After configuring the vector adapter, the application needs to use the
+ *   rte_event_vector_adapter_enqueue() function to enqueue objects i.e.,
+ *   mbufs/ptrs/u64s to the vector adapter.
+ *   On reaching the configured vector size or timeout, the vector adapter
+ *   enqueues the event vector to the event queue.
+ *   Note: The application should set event_type and sub_event_type appropriately
+ *   so that the contents of the vector event can be identified on dequeue.
+ *
+ * - If the vector adapter advertises the RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
+ * capability, application can use the RTE_EVENT_VECTOR_ENQ_[S|E]OV flags
+ * to indicate the start and end of a vector event.
+ * * When RTE_EVENT_VECTOR_ENQ_SOV is set, the vector adapter will flush any
+ * aggregation in-progress and start aggregating a new vector event with
+ * the enqueued objects.
+ * * When RTE_EVENT_VECTOR_ENQ_EOV is set, the vector adapter will add the
+ * objects enqueued to the in-progress aggregation and enqueue the vector
+ * event to the event queue, even if configured vector size or timeout is
+ * not reached.
+ * * If both flags are set, the vector adapter will flush any aggregation in
+ * progress and enqueue the objects as a new vector event to the event
+ * queue.
+ *
+ * - If the vector adapter reaches the configured vector size, it will enqueue
+ * the aggregated vector event to the event queue.
+ *
+ * - If the vector adapter reaches the configured vector timeout, it will flush
+ *   the aggregated objects as a vector event if the minimum vector size is
+ *   reached; otherwise, it will enqueue the objects as single events to the
+ *   event queue.
+ *
+ * - If the vector adapter is unable to aggregate the objs into a vector event,
+ * it will enqueue the objs as single events to the event queue with the event
+ * properties mentioned in rte_event_vector_adapter_conf::ev_fallback.
+ *
+ * Before using the vector adapter, the application has to create and configure
+ * an event device and based on the event device capability it might require
+ * creating an additional event port.
+ *
+ * When the application creates the vector adapter using the
+ * ``rte_event_vector_adapter_create()`` function, the event device driver
+ * capabilities are checked. If an in-built port is absent, the application
+ * uses the default function to create a new event port.
+ * For finer control over event port creation, the application should use
+ * the ``rte_event_vector_adapter_create_ext()`` function.
+ *
+ * The application can enqueue one or more objs to the vector adapter using the
+ * ``rte_event_vector_adapter_enqueue()`` function and control the aggregation
+ * using the flags.
+ *
+ * Vector adapters report stats using the ``rte_event_vector_adapter_stats_get()``
+ * function and reset the stats using the ``rte_event_vector_adapter_stats_reset()``.
+ *
+ * The application can destroy the vector adapter using the
+ * ``rte_event_vector_adapter_destroy()`` function.
+ *
+ */
+
+#include <rte_eventdev.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV (1ULL << 0)
+/**< Vector adapter supports Start of Vector (SOV) and End of Vector (EOV) flags
+ * in the enqueue flags.
+ *
+ * @see RTE_EVENT_VECTOR_ENQ_SOV
+ * @see RTE_EVENT_VECTOR_ENQ_EOV
+ */
+
+#define RTE_EVENT_VECTOR_ENQ_SOV (1ULL << 0)
+/**< Indicates the start of a vector event. When enqueue is called with
+ * RTE_EVENT_VECTOR_ENQ_SOV, the vector adapter will flush any vector
+ * aggregation in progress and start aggregating a new vector event with
+ * the enqueued objects.
+ */
+#define RTE_EVENT_VECTOR_ENQ_EOV (1ULL << 1)
+/**< Indicates the end of a vector event. When enqueue is called with
+ * RTE_EVENT_VECTOR_ENQ_EOV, the vector adapter will add the objects
+ * to any in-progress aggregation and flush the event vector.
+ */
+#define RTE_EVENT_VECTOR_ENQ_FLUSH (1ULL << 2)
+/**< Flush any in-progress vector aggregation. */
+
+/**
+ * Vector adapter configuration structure
+ */
+struct rte_event_vector_adapter_conf {
+ uint8_t event_dev_id;
+ /**< Event device identifier */
+ uint32_t socket_id;
+ /**< Identifier of socket from which to allocate memory for adapter */
+ struct rte_event ev;
+ /**<
+ * The values from the following event fields will be used when
+ * queuing work:
+ * - queue_id: Targeted event queue ID for vector event.
+ * - event_priority: Event priority of the vector event in
+ * the event queue relative to other events.
+ * - sched_type: Scheduling type for events from this vector adapter.
+ * - event_type: Event type for the vector event.
+ * - sub_event_type: Sub event type for the vector event.
+ * - flow_id: Flow ID for the vectors enqueued to the event queue by
+ * the vector adapter.
+ */
+ struct rte_event ev_fallback;
+ /**<
+ * The values from the following event fields will be used when
+ * aggregation fails and single event is enqueued:
+ * - event_type: Event type for the single event.
+ * - sub_event_type: Sub event type for the single event.
+ * - flow_id: Flow ID for the single event.
+ *
+ * Other fields are taken from rte_event_vector_adapter_conf::ev.
+ */
+ uint16_t vector_sz;
+ /**<
+ * Indicates the maximum number of enqueued objects to combine into a vector.
+ * Should be within vectorization limits of the adapter.
+ * @see rte_event_vector_adapter_info::min_vector_sz
+ * @see rte_event_vector_adapter_info::max_vector_sz
+ */
+ uint64_t vector_timeout_ns;
+ /**<
+ * Indicates the maximum number of nanoseconds to wait for receiving
+ * work. Should be within vectorization limits of the adapter.
+ * @see rte_event_vector_adapter_info::min_vector_ns
+ * @see rte_event_vector_adapter_info::max_vector_ns
+ */
+ struct rte_mempool *vector_mp;
+ /**<
+ * Indicates the mempool that should be used for allocating
+ * rte_event_vector container.
+ * @see rte_event_vector_pool_create
+ */
+};
+
+/**
+ * Vector adapter vector info structure
+ */
+struct rte_event_vector_adapter_info {
+ uint8_t max_vector_adapters_per_event_queue;
+ /**< Maximum number of vector adapters configurable */
+ uint16_t min_vector_sz;
+ /**< Minimum vector size configurable */
+ uint16_t max_vector_sz;
+ /**< Maximum vector size configurable */
+ uint64_t min_vector_timeout_ns;
+ /**< Minimum vector timeout configurable */
+ uint64_t max_vector_timeout_ns;
+ /**< Maximum vector timeout configurable */
+ uint8_t log2_sz;
+	/**< True if the configured vector size must be a power of two. */
+};
+
+/**
+ * Vector adapter statistics structure
+ */
+struct rte_event_vector_adapter_stats {
+ uint64_t vectorized;
+ /**< Number of events vectorized */
+ uint64_t vectors_timedout;
+ /**< Number of timeouts occurred */
+ uint64_t vectors_flushed;
+ /**< Number of vectors flushed */
+ uint64_t alloc_failures;
+ /**< Number of vector allocation failures */
+};
+
+struct rte_event_vector_adapter;
+
+typedef int (*rte_event_vector_adapter_enqueue_t)(struct rte_event_vector_adapter *adapter,
+ uint64_t objs[], uint16_t num_elem,
+ uint64_t flags);
+/**< @internal Enqueue objs into the event vector adapter. */
+
+struct __rte_cache_aligned rte_event_vector_adapter {
+ rte_event_vector_adapter_enqueue_t enqueue;
+ /**< Pointer to driver enqueue function. */
+ struct rte_event_vector_adapter_data *data;
+ /**< Pointer to the adapter data */
+ const struct event_vector_adapter_ops *ops;
+ /**< Functions exported by adapter driver */
+
+ uint32_t adapter_id;
+ /**< Identifier of the adapter instance. */
+ uint8_t used : 1;
+ /**< Flag to indicate that this adapter is being used. */
+};
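Caching ops->enqueue in the struct's own first-cacheline enqueue field saves one dependent pointer load on the fast path compared with going through ops each time. A minimal sketch of that dispatch pattern, with invented names:

```c
#include <assert.h>
#include <stdint.h>

/* Driver ops table and an adapter that caches the hot function
 * pointer directly, mirroring the struct layout above. */
struct toy_ops {
	int (*enqueue)(void *ctx, uint64_t obj);
};

struct toy_adapter {
	int (*enqueue)(void *ctx, uint64_t obj); /* cached fast-path copy */
	const struct toy_ops *ops;               /* full ops, slow path only */
	uint64_t last;
};

static int
toy_drv_enqueue(void *ctx, uint64_t obj)
{
	((struct toy_adapter *)ctx)->last = obj;
	return 1; /* one object accepted */
}
```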
+
+/**
+ * Callback function type for producer port creation.
+ */
+typedef int (*rte_event_vector_adapter_port_conf_cb_t)(uint8_t event_dev_id, uint8_t *event_port_id,
+ void *conf_arg);
+
+/**
+ * Create an event vector adapter.
+ *
+ * This function creates an event vector adapter based on the provided
+ * configuration. The adapter can be used to combine multiple mbufs/ptrs/u64s
+ * into a single vector event, i.e., rte_event_vector, which is then enqueued
+ * to the event queue provided.
+ * @see rte_event_vector_adapter_conf::ev::queue_id.
+ *
+ * @param conf
+ * Configuration for the event vector adapter.
+ * @return
+ * - Pointer to the created event vector adapter on success.
+ * - NULL on failure with rte_errno set to the error code.
+ * Possible rte_errno values include:
+ * - EINVAL: Invalid event device identifier specified in config.
+ * - ENOMEM: Unable to allocate sufficient memory for adapter instances.
+ * - ENOSPC: Maximum number of adapters already created.
+ */
+__rte_experimental
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf);
+
+/**
+ * Create an event vector adapter with the supplied callback.
+ *
+ * This function can be used for more granular control over event vector
+ * adapter creation. If a built-in port is absent, the function uses the
+ * supplied callback to create an event port and obtain the port id to be
+ * used as a producer port.
+ *
+ * @param conf
+ * The event vector adapter configuration structure.
+ * @param conf_cb
+ * The port config callback function.
+ * @param conf_arg
+ * Opaque pointer to the argument for the callback function.
+ * @return
+ * - Pointer to the new allocated event vector adapter on success.
+ * - NULL on error with rte_errno set appropriately.
+ * Possible rte_errno values include:
+ * - ERANGE: vector_timeout_ns is not in supported range.
+ * - ENOMEM: Unable to allocate sufficient memory for adapter instances.
+ * - EINVAL: Invalid event device identifier specified in config.
+ * - ENOSPC: Maximum number of adapters already created.
+ */
+__rte_experimental
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
+ rte_event_vector_adapter_port_conf_cb_t conf_cb,
+ void *conf_arg);
+
+/**
+ * Lookup an event vector adapter using its identifier.
+ *
+ * This function returns the event vector adapter based on the adapter_id.
+ * This is useful when the adapter is created in another process and the
+ * application wants to use the adapter in the current process.
+ *
+ * @param adapter_id
+ * Identifier of the event vector adapter to look up.
+ * @return
+ * - Pointer to the event vector adapter on success.
+ * - NULL if the adapter is not found.
+ */
+__rte_experimental
+struct rte_event_vector_adapter *
+rte_event_vector_adapter_lookup(uint32_t adapter_id);
+
+/**
+ * Destroy an event vector adapter.
+ *
+ * This function releases the resources associated with the event vector adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter to be destroyed.
+ * @return
+ * - 0 on success.
+ * - Negative value on failure with rte_errno set to the error code.
+ */
+__rte_experimental
+int
+rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter);
+
+/**
+ * Get the vector info of an event vector adapter.
+ *
+ * This function retrieves the vector info of the event vector adapter.
+ *
+ * @param event_dev_id
+ * Event device identifier.
+ * @param info
+ * Pointer to the structure where the vector info will be stored.
+ * @return
+ * 0 on success, negative value on failure.
+ * - EINVAL if the event device identifier is invalid.
+ * - ENOTSUP if the event device does not support vector adapters.
+ */
+__rte_experimental
+int
+rte_event_vector_adapter_info_get(uint8_t event_dev_id,
+ struct rte_event_vector_adapter_info *info);
+
+/**
+ * Get the configuration of an event vector adapter.
+ *
+ * This function retrieves the configuration of the event vector adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter.
+ * @param conf
+ * Pointer to the structure where the configuration will be stored.
+ * @return
+ * 0 on success, negative value on failure.
+ */
+__rte_experimental
+int
+rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_conf *conf);
+
+/**
+ * Get the remaining event vector adapters.
+ *
+ * This function retrieves the number of remaining event vector adapters
+ * available for a given event device and event queue.
+ *
+ * @param event_dev_id
+ * Event device identifier.
+ * @param event_queue_id
+ * Event queue identifier.
+ * @return
+ * Number of vector adapter instances that can still be created on the event queue.
+ */
+__rte_experimental
+uint8_t
+rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id);
+
+/**
+ * Get the event vector adapter statistics.
+ *
+ * This function retrieves the statistics of the event vector adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter.
+ * @param stats
+ * Pointer to the structure where the statistics will be stored.
+ * @return
+ * 0 on success, negative value on failure.
+ */
+__rte_experimental
+int
+rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats);
+
+/**
+ * Reset the event vector adapter statistics.
+ *
+ * This function resets the statistics of the event vector adapter to their default values.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter whose statistics are to be reset.
+ * @return
+ * 0 on success, negative value on failure.
+ */
+__rte_experimental
+int
+rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter);
+
+/**
+ * Retrieve the service ID of the event vector adapter. If the adapter doesn't
+ * use an rte_service function, this function returns -ESRCH.
+ *
+ * @param adapter
+ * A pointer to an event vector adapter.
+ * @param [out] service_id
+ * A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ * - 0: Success
+ * - <0: Error code on failure
+ * - -ESRCH: the adapter does not require a service to operate
+ */
+__rte_experimental
+int
+rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter,
+ uint32_t *service_id);
+
+/**
+ * Enqueue objs into the event vector adapter.
+ *
+ * This function enqueues a specified number of objs into the event vector adapter.
+ * The objs are combined into a single vector event, i.e., rte_event_vector, which
+ * is then enqueued to the event queue configured in the adapter.
+ *
+ * @param adapter
+ * Pointer to the event vector adapter.
+ * @param objs
+ * Array of objs to be enqueued.
+ * @param num_elem
+ * Number of objs to be enqueued.
+ * @param flags
+ * Flags to be used for the enqueue operation.
+ * @return
+ * Number of objs enqueued on success.
+ */
+__rte_experimental
+static inline int
+rte_event_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uint64_t objs[],
+ uint16_t num_elem, uint64_t flags)
+{
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (adapter == NULL) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+
+ if (adapter->used == false) {
+ rte_errno = EINVAL;
+ return 0;
+ }
+#endif
+ return adapter->enqueue(adapter, objs, num_elem, flags);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EVENT_VECTOR_ADAPTER_H__ */
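The SOV/EOV semantics documented in the header fit in a few lines. This toy aggregator (invented names, not the library's internals) mirrors the documented rules: SOV flushes any in-progress vector before aggregating anew, EOV flushes after appending, and a full vector always flushes:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ENQ_SOV (1ULL << 0)
#define ENQ_EOV (1ULL << 1)
#define VEC_SZ  4

struct toy_agg {
	uint64_t buf[VEC_SZ];
	int n;       /* objects aggregated so far */
	int flushes; /* vectors emitted */
};

static void
toy_flush(struct toy_agg *a)
{
	if (a->n > 0) {
		a->flushes++;
		a->n = 0;
	}
}

static void
toy_enqueue(struct toy_agg *a, uint64_t obj, uint64_t flags)
{
	if (flags & ENQ_SOV)
		toy_flush(a); /* start a fresh vector */
	a->buf[a->n++] = obj;
	if ((flags & ENQ_EOV) || a->n == VEC_SZ)
		toy_flush(a); /* emit even below the size limit */
}
```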
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index b66cbb4676..916bad6c2c 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -257,6 +257,28 @@ rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *cap
return 0;
}
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_caps_get, 25.07)
+int
+rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
+{
+ const struct event_vector_adapter_ops *ops;
+ struct rte_eventdev *dev;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+
+	if (dev->dev_ops->vector_adapter_caps_get == NULL) {
+		*caps = 0;
+		return 0;
+	}
+
+	return dev->dev_ops->vector_adapter_caps_get(dev, caps, &ops);
+}
+
static inline int
event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
{
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 6400d6109f..3c7fcbf0be 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1985,6 +1985,16 @@ int
rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
uint32_t *caps);
+/* Vector adapter capability bitmap flags */
+#define RTE_EVENT_VECTOR_ADAPTER_CAP_INTERNAL_PORT 0x1
+/**< This flag is set when the vector adapter is capable of generating events
+ * using an internal event port.
+ */
+
+__rte_experimental
+int
+rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps);
+
/**
* Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
*
--
2.43.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 2/3] eventdev: add default software vector adapter
2025-04-10 18:00 ` [PATCH 0/3] " pbhagavatula
2025-04-10 18:00 ` [PATCH 1/3] eventdev: " pbhagavatula
@ 2025-04-10 18:00 ` pbhagavatula
2025-04-10 18:00 ` [PATCH 3/3] app/eventdev: add vector adapter performance test pbhagavatula
2 siblings, 0 replies; 12+ messages in thread
From: pbhagavatula @ 2025-04-10 18:00 UTC (permalink / raw)
To: jerinj, pravin.pathak, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, amitprakashs,
s.v.naga.harish.k, anatoly.burakov
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
When event device PMD doesn't support vector adapter,
the library will fallback to software implementation
which relies on service core to check for timeouts
and vectorizes the objects on enqueue.
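The timeout decision a software service core makes can be sketched independently of the eventdev plumbing; the names below are illustrative, not the patch's internals:

```c
#include <assert.h>
#include <stdint.h>

/* Flush the in-progress vector once vector_timeout_ns has elapsed
 * since the first object was aggregated; an empty aggregation never
 * times out. The caller supplies the current timestamp. */
struct sw_vec_state {
	uint64_t first_enq_ns; /* timestamp of first aggregated object */
	int n;                 /* objects currently aggregated */
};

static inline int
sw_vec_timed_out(const struct sw_vec_state *s, uint64_t now_ns,
		 uint64_t vector_timeout_ns)
{
	return s->n > 0 && (now_ns - s->first_enq_ns) >= vector_timeout_ns;
}
```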
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/meson.build | 1 +
app/test/test_event_vector_adapter.c | 682 ++++++++++++++++++++++++
lib/eventdev/eventdev_pmd.h | 2 +
lib/eventdev/rte_event_vector_adapter.c | 392 ++++++++++++++
lib/eventdev/rte_eventdev.c | 2 +
5 files changed, 1079 insertions(+)
create mode 100644 app/test/test_event_vector_adapter.c
diff --git a/app/test/meson.build b/app/test/meson.build
index b6285a6b45..0686f3c1f2 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -79,6 +79,7 @@ source_file_deps = {
'test_event_eth_tx_adapter.c': ['bus_vdev', 'ethdev', 'net_ring', 'eventdev'],
'test_event_ring.c': ['eventdev'],
'test_event_timer_adapter.c': ['ethdev', 'eventdev', 'bus_vdev'],
+ 'test_event_vector_adapter.c': ['eventdev', 'bus_vdev'],
'test_eventdev.c': ['eventdev', 'bus_vdev'],
'test_external_mem.c': [],
'test_fbarray.c': [],
diff --git a/app/test/test_event_vector_adapter.c b/app/test/test_event_vector_adapter.c
new file mode 100644
index 0000000000..8754789bef
--- /dev/null
+++ b/app/test/test_event_vector_adapter.c
@@ -0,0 +1,682 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell International Ltd.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_random.h>
+
+#include "test.h"
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_event_vector_adapter(void)
+{
+ printf("event_vector_adapter not supported on Windows, skipping test\n");
+ return TEST_SKIPPED;
+}
+
+#else
+
+#include <rte_bus_vdev.h>
+#include <rte_event_vector_adapter.h>
+#include <rte_eventdev.h>
+#include <rte_launch.h>
+#include <rte_lcore.h>
+#include <rte_mempool.h>
+#include <rte_per_lcore.h>
+#include <rte_random.h>
+#include <rte_service.h>
+#include <stdbool.h>
+
+#define MAX_VECTOR_SIZE 8
+#define MAX_EVENTS 512
+#define MAX_RETRIES 16
+
+static int sw_slcore = -1;
+static int adapter_slcore = -1;
+static uint8_t evdev;
+static bool using_services;
+static uint8_t vector_adptr_id;
+static uint8_t evdev_max_queues;
+static struct rte_mempool *vector_mp;
+
+static uint64_t objs[MAX_VECTOR_SIZE] = {0xDEADBEAF, 0xDEADBEEF, 0xDEADC0DE, 0xDEADCAFE,
+ 0xDEADFACE, 0xDEADFADE, 0xDEADFAAA, 0xDEADFAAB};
+
+static int
+test_event_vector_adapter_create_multi(void)
+{
+ struct rte_event_vector_adapter *adapter[RTE_EVENT_MAX_QUEUES_PER_DEV]
+ [RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE];
+ struct rte_event_vector_adapter_conf conf;
+ struct rte_event_vector_adapter_info info;
+ int ret, i, j;
+
+ memset(&conf, 0, sizeof(conf));
+ memset(&info, 0, sizeof(info));
+
+ ret = rte_event_vector_adapter_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter info");
+
+ vector_mp = rte_event_vector_pool_create("vector_mp", MAX_EVENTS, 0, MAX_VECTOR_SIZE,
+ rte_socket_id());
+
+ TEST_ASSERT(vector_mp != NULL, "Failed to create mempool");
+
+ conf.event_dev_id = evdev;
+ conf.socket_id = rte_socket_id();
+ conf.vector_sz = RTE_MIN(MAX_VECTOR_SIZE, info.max_vector_sz);
+ conf.vector_timeout_ns = info.max_vector_timeout_ns;
+ conf.vector_mp = vector_mp;
+
+ conf.ev.queue_id = 0;
+ conf.ev.event_type = RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU;
+ conf.ev.sched_type = RTE_SCHED_TYPE_PARALLEL;
+
+ for (i = 0; i < evdev_max_queues; i++) {
+ for (j = 0; j < info.max_vector_adapters_per_event_queue; j++) {
+			conf.ev.queue_id = i;
+ adapter[i][j] = rte_event_vector_adapter_create(&conf);
+ TEST_ASSERT(adapter[i][j] != NULL, "Failed to create event vector adapter");
+ }
+ }
+
+ for (i = 0; i < evdev_max_queues; i++)
+ for (j = 0; j < info.max_vector_adapters_per_event_queue; j++)
+ TEST_ASSERT(adapter[i][j] == rte_event_vector_adapter_lookup(
+ adapter[i][j]->adapter_id),
+ "Failed to lookup event vector adapter");
+
+ for (i = 0; i < evdev_max_queues; i++)
+ for (j = 0; j < info.max_vector_adapters_per_event_queue; j++)
+ rte_event_vector_adapter_destroy(adapter[i][j]);
+
+ rte_mempool_free(vector_mp);
+ vector_mp = NULL;
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_create(void)
+{
+ struct rte_event_vector_adapter_conf conf;
+ struct rte_event_vector_adapter_info info;
+ struct rte_event_vector_adapter *adapter;
+ uint32_t service_id;
+ int ret;
+
+ memset(&conf, 0, sizeof(conf));
+ memset(&info, 0, sizeof(info));
+
+ ret = rte_event_vector_adapter_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter info");
+
+ vector_mp = rte_event_vector_pool_create("vector_mp", MAX_EVENTS, 0, MAX_VECTOR_SIZE,
+ rte_socket_id());
+ TEST_ASSERT(vector_mp != NULL, "Failed to create mempool");
+
+ conf.event_dev_id = evdev;
+ conf.socket_id = rte_socket_id();
+ conf.vector_sz = RTE_MIN(MAX_VECTOR_SIZE, info.max_vector_sz);
+ conf.vector_timeout_ns = info.max_vector_timeout_ns;
+ conf.vector_mp = vector_mp;
+
+ conf.ev.queue_id = 0;
+ conf.ev.event_type = RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU;
+ conf.ev.sched_type = RTE_SCHED_TYPE_PARALLEL;
+
+ conf.ev_fallback.event_type = RTE_EVENT_TYPE_CPU;
+ adapter = rte_event_vector_adapter_create(&conf);
+ TEST_ASSERT(adapter != NULL, "Failed to create event vector adapter");
+
+ vector_adptr_id = adapter->adapter_id;
+
+ TEST_ASSERT(adapter == rte_event_vector_adapter_lookup(vector_adptr_id),
+ "Failed to lookup event vector adapter");
+
+ if (rte_event_vector_adapter_service_id_get(adapter, &service_id) == 0) {
+ if (sw_slcore < 0) {
+ adapter_slcore = rte_get_next_lcore(sw_slcore, 1, 0);
+ TEST_ASSERT_SUCCESS(rte_service_lcore_add(adapter_slcore),
+ "Failed to add service core");
+ TEST_ASSERT_SUCCESS(rte_service_lcore_start(adapter_slcore),
+ "Failed to start service core");
+ } else
+ adapter_slcore = sw_slcore;
+ TEST_ASSERT(rte_service_map_lcore_set(service_id, adapter_slcore, 1) == 0,
+ "Failed to map adapter service");
+ TEST_ASSERT(rte_service_runstate_set(service_id, 1) == 0,
+ "Failed to start adapter service");
+ }
+ return TEST_SUCCESS;
+}
+
+static void
+test_event_vector_adapter_free(void)
+{
+ struct rte_event_vector_adapter *adapter;
+ uint32_t service_id;
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+
+ if (adapter != NULL) {
+ if (rte_event_vector_adapter_service_id_get(adapter, &service_id) == 0) {
+ rte_service_runstate_set(service_id, 0);
+ rte_service_map_lcore_set(service_id, adapter_slcore, 0);
+ if (adapter_slcore != sw_slcore) {
+ rte_service_lcore_stop(adapter_slcore);
+ rte_service_lcore_del(adapter_slcore);
+ }
+ adapter_slcore = -1;
+ }
+ rte_event_vector_adapter_destroy(adapter);
+ }
+ rte_mempool_free(vector_mp);
+ vector_mp = NULL;
+}
+
+static int
+test_event_vector_adapter_enqueue(void)
+{
+ struct rte_event_vector_adapter *adapter;
+ struct rte_event ev;
+ int ret, i;
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, MAX_VECTOR_SIZE, 0);
+ TEST_ASSERT((ret == MAX_VECTOR_SIZE), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+
+ TEST_ASSERT((ev.vec->nb_elem == MAX_VECTOR_SIZE), "Incomplete event vector %d",
+ ev.vec->nb_elem);
+ TEST_ASSERT((ev.queue_id == 0), "Invalid queue id %d", ev.queue_id);
+ TEST_ASSERT((ev.event_type == (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)),
+ "Invalid event type %d", ev.event_type);
+ TEST_ASSERT((ev.sched_type == RTE_SCHED_TYPE_PARALLEL), "Invalid sched type %d",
+ ev.sched_type);
+
+ for (i = 0; i < MAX_VECTOR_SIZE; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[i]), "Invalid object in event vector %" PRIx64,
+ ev.vec->u64s[i]);
+ rte_mempool_put(rte_mempool_from_obj(ev.vec), ev.vec);
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_enqueue_tmo(void)
+{
+ struct rte_event_vector_adapter_info info;
+ struct rte_event_vector_adapter *adapter;
+ uint16_t vec_sz = MAX_VECTOR_SIZE - 4;
+ struct rte_event ev;
+ int ret, i;
+
+ memset(&info, 0, sizeof(info));
+ ret = rte_event_vector_adapter_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter info");
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, vec_sz, 0);
+ TEST_ASSERT((ret == vec_sz), "Failed to enqueue event vector %d", ret);
+
+ rte_delay_us(info.max_vector_timeout_ns / 1000);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+
+ TEST_ASSERT((ev.vec->nb_elem == vec_sz), "Incomplete event vector %d", ev.vec->nb_elem);
+ TEST_ASSERT((ev.queue_id == 0), "Invalid queue id %d", ev.queue_id);
+ TEST_ASSERT((ev.event_type == (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)),
+ "Invalid event type %d", ev.event_type);
+ TEST_ASSERT((ev.sched_type == RTE_SCHED_TYPE_PARALLEL), "Invalid sched type %d",
+ ev.sched_type);
+
+ for (i = 0; i < vec_sz; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[i]), "Invalid object in event vector %" PRIx64,
+ ev.vec->u64s[i]);
+ rte_mempool_put(rte_mempool_from_obj(ev.vec), ev.vec);
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_enqueue_fallback(void)
+{
+ struct rte_event_vector_adapter *adapter;
+ uint64_t vec[MAX_EVENTS];
+ struct rte_event ev;
+ int ret, i;
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_mempool_get_bulk(vector_mp, (void **)vec, MAX_EVENTS);
+ TEST_ASSERT(ret == 0, "Failed to get mempool objects %d", ret);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, 1, 0);
+ TEST_ASSERT((ret == 1), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.event_type == RTE_EVENT_TYPE_CPU), "Incorrect fallback event type %d",
+ ev.event_type);
+ TEST_ASSERT((ev.sched_type == RTE_SCHED_TYPE_PARALLEL), "Invalid sched type %d",
+ ev.sched_type);
+
+ rte_mempool_put_bulk(vector_mp, (void **)vec, MAX_EVENTS);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_enqueue_sov(void)
+{
+ struct rte_event_vector_adapter_info info;
+ struct rte_event_vector_adapter *adapter;
+ uint16_t vec_sz = MAX_VECTOR_SIZE - 4;
+ struct rte_event ev;
+ uint32_t caps;
+ int ret, i;
+
+ memset(&info, 0, sizeof(info));
+ ret = rte_event_vector_adapter_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter info");
+
+ caps = 0;
+ ret = rte_event_vector_adapter_caps_get(evdev, &caps);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter caps");
+
+ if (!(caps & RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV)) {
+ printf("SOV/EOV not supported, skipping test\n");
+ return TEST_SKIPPED;
+ }
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, vec_sz, 0);
+ TEST_ASSERT((ret == vec_sz), "Failed to enqueue event vector %d", ret);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, &objs[vec_sz], 2, RTE_EVENT_VECTOR_ENQ_SOV);
+ TEST_ASSERT((ret == 2), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.vec->nb_elem == vec_sz), "Incorrect event vector %d", ev.vec->nb_elem);
+
+ for (i = 0; i < vec_sz; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[i]), "Invalid object in event vector %" PRIx64,
+ ev.vec->u64s[i]);
+
+ rte_delay_us(info.max_vector_timeout_ns / 1000);
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.vec->nb_elem == 2), "Incorrect event vector %d", ev.vec->nb_elem);
+
+ for (i = 0; i < 2; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[vec_sz + i]),
+ "Invalid object in event vector %" PRIx64, ev.vec->u64s[i]);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_enqueue_eov(void)
+{
+ struct rte_event_vector_adapter_info info;
+ struct rte_event_vector_adapter *adapter;
+ uint16_t vec_sz = MAX_VECTOR_SIZE - 4;
+ struct rte_event ev;
+ uint32_t caps;
+ int ret, i;
+
+ memset(&info, 0, sizeof(info));
+ ret = rte_event_vector_adapter_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter info");
+
+ caps = 0;
+ ret = rte_event_vector_adapter_caps_get(evdev, &caps);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter caps");
+
+ if (!(caps & RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV)) {
+ printf("SOV/EOV not supported, skipping test\n");
+ return TEST_SKIPPED;
+ }
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, vec_sz, 0);
+ TEST_ASSERT((ret == vec_sz), "Failed to enqueue event vector %d", ret);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, &objs[vec_sz], 1, RTE_EVENT_VECTOR_ENQ_EOV);
+ TEST_ASSERT((ret == 1), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.vec->nb_elem == vec_sz + 1), "Incorrect event vector %d", ev.vec->nb_elem);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, MAX_VECTOR_SIZE - 1, 0);
+ TEST_ASSERT((ret == MAX_VECTOR_SIZE - 1), "Failed to enqueue event vector %d", ret);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, &objs[vec_sz], vec_sz,
+ RTE_EVENT_VECTOR_ENQ_EOV);
+ TEST_ASSERT((ret == vec_sz), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.vec->nb_elem == MAX_VECTOR_SIZE), "Incorrect event vector %d",
+ ev.vec->nb_elem);
+
+ for (i = 0; i < MAX_VECTOR_SIZE - 1; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[i]), "Invalid object in event vector %" PRIx64,
+ ev.vec->u64s[i]);
+
+ TEST_ASSERT((ev.vec->u64s[MAX_VECTOR_SIZE - 1] == objs[vec_sz]),
+ "Invalid object in event vector %" PRIx64, ev.vec->u64s[MAX_VECTOR_SIZE - 1]);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.vec->nb_elem == vec_sz - 1), "Incorrect event vector %d", ev.vec->nb_elem);
+
+ for (i = 0; i < vec_sz - 1; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[vec_sz + i + 1]),
+ "Invalid object in event vector %" PRIx64, ev.vec->u64s[i]);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_enqueue_sov_eov(void)
+{
+ struct rte_event_vector_adapter_info info;
+ struct rte_event_vector_adapter *adapter;
+ uint16_t vec_sz = MAX_VECTOR_SIZE - 4;
+ struct rte_event ev;
+ uint32_t caps;
+ int ret, i;
+
+ memset(&info, 0, sizeof(info));
+ ret = rte_event_vector_adapter_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter info");
+
+ caps = 0;
+ ret = rte_event_vector_adapter_caps_get(evdev, &caps);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event vector adapter caps");
+
+ if (!(caps & RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV)) {
+ printf("SOV/EOV not supported, skipping test\n");
+ return TEST_SKIPPED;
+ }
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, vec_sz,
+ RTE_EVENT_VECTOR_ENQ_SOV | RTE_EVENT_VECTOR_ENQ_EOV);
+ TEST_ASSERT((ret == vec_sz), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.event_type == (RTE_EVENT_TYPE_CPU | RTE_EVENT_TYPE_VECTOR)),
+ "Incorrect event type %d", ev.event_type);
+ TEST_ASSERT((ev.vec->nb_elem == vec_sz), "Incorrect event vector %d", ev.vec->nb_elem);
+
+ for (i = 0; i < vec_sz; i++)
+ TEST_ASSERT((ev.vec->u64s[i] == objs[i]), "Invalid object in event vector %" PRIx64,
+ ev.vec->u64s[i]);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, 1,
+ RTE_EVENT_VECTOR_ENQ_SOV | RTE_EVENT_VECTOR_ENQ_EOV);
+ TEST_ASSERT((ret == 1), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ if (info.min_vector_sz > 1)
+ TEST_ASSERT((ev.event_type == RTE_EVENT_TYPE_CPU), "Incorrect event type %d",
+ ev.event_type);
+ else
+ TEST_ASSERT((ev.event_type == (RTE_EVENT_TYPE_CPU | RTE_EVENT_TYPE_VECTOR)),
+ "Incorrect event type %d", ev.event_type);
+
+ return TEST_SUCCESS;
+}
+
+static int
+test_event_vector_adapter_enqueue_flush(void)
+{
+ struct rte_event_vector_adapter *adapter;
+ struct rte_event ev;
+ int ret, i;
+
+ adapter = rte_event_vector_adapter_lookup(vector_adptr_id);
+ TEST_ASSERT(adapter != NULL, "Failed to lookup event vector adapter");
+
+ ret = rte_event_vector_adapter_enqueue(adapter, objs, MAX_VECTOR_SIZE - 1, 0);
+ TEST_ASSERT((ret == MAX_VECTOR_SIZE - 1), "Failed to enqueue event vector %d", ret);
+
+ ret = rte_event_vector_adapter_enqueue(adapter, NULL, 0, RTE_EVENT_VECTOR_ENQ_FLUSH);
+ TEST_ASSERT((ret == 0), "Failed to enqueue event vector %d", ret);
+
+ for (i = 0; i < MAX_RETRIES; i++) {
+ ret = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
+ if (ret)
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ TEST_ASSERT((ret == 1), "Failed to dequeue event vector %d", ret);
+ TEST_ASSERT((ev.event_type == (RTE_EVENT_TYPE_CPU | RTE_EVENT_TYPE_VECTOR)),
+ "Incorrect event type %d", ev.event_type);
+ TEST_ASSERT((ev.sched_type == RTE_SCHED_TYPE_PARALLEL), "Invalid sched type %d",
+ ev.sched_type);
+
+ return TEST_SUCCESS;
+}
+
+static inline int
+eventdev_setup(void)
+{
+ struct rte_event_queue_conf queue_conf;
+ struct rte_event_dev_config conf;
+ struct rte_event_dev_info info;
+ uint32_t service_id;
+ int ret, i;
+
+ ret = rte_event_dev_info_get(evdev, &info);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+ memset(&conf, 0, sizeof(conf));
+ conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
+ conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
+ conf.nb_event_queue_flows = info.max_event_queue_flows;
+ conf.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
+ conf.nb_events_limit = info.max_num_events;
+ conf.nb_event_queues = info.max_event_queues;
+ conf.nb_event_ports = 1;
+
+ ret = rte_event_dev_configure(evdev, &conf);
+ TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+ ret = rte_event_queue_default_conf_get(evdev, 0, &queue_conf);
+ TEST_ASSERT_SUCCESS(ret, "Failed to get default queue conf");
+
+ queue_conf.schedule_type = RTE_SCHED_TYPE_PARALLEL;
+ for (i = 0; i < info.max_event_queues; i++) {
+ ret = rte_event_queue_setup(evdev, i, &queue_conf);
+ TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+ }
+
+ /* Configure event port */
+ ret = rte_event_port_setup(evdev, 0, NULL);
+ TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", 0);
+ ret = rte_event_port_link(evdev, 0, NULL, NULL, 0);
+ TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", 0);
+
+ /* If this is a software event device, map and start its service */
+ if (rte_event_dev_service_id_get(evdev, &service_id) == 0) {
+ TEST_ASSERT_SUCCESS(rte_service_lcore_add(sw_slcore), "Failed to add service core");
+ TEST_ASSERT_SUCCESS(rte_service_lcore_start(sw_slcore),
+ "Failed to start service core");
+ TEST_ASSERT_SUCCESS(rte_service_map_lcore_set(service_id, sw_slcore, 1),
+ "Failed to map evdev service");
+ TEST_ASSERT_SUCCESS(rte_service_runstate_set(service_id, 1),
+ "Failed to start evdev service");
+ }
+
+ ret = rte_event_dev_start(evdev);
+ TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+ evdev_max_queues = info.max_event_queues;
+
+ return TEST_SUCCESS;
+}
+
+static int
+testsuite_setup(void)
+{
+ uint32_t service_id;
+ uint32_t caps = 0;
+
+ rte_service_lcore_reset_all();
+
+ if (rte_event_dev_count() == 0) {
+ RTE_LOG(DEBUG, EAL,
+ "Failed to find a valid event device... "
+ "testing with event_sw device\n");
+ TEST_ASSERT_SUCCESS(rte_vdev_init("event_sw0", NULL), "Error creating eventdev");
+ evdev = rte_event_dev_get_dev_id("event_sw0");
+ }
+
+ rte_event_vector_adapter_caps_get(evdev, &caps);
+
+ if (rte_event_dev_service_id_get(evdev, &service_id) == 0)
+ using_services = true;
+
+ if (using_services)
+ sw_slcore = rte_get_next_lcore(-1, 1, 0);
+
+ return eventdev_setup();
+}
+
+static void
+testsuite_teardown(void)
+{
+ rte_event_dev_stop(evdev);
+ rte_event_dev_close(evdev);
+}
+
+static struct unit_test_suite functional_testsuite = {
+ .suite_name = "Event vector adapter test suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(NULL, test_event_vector_adapter_free,
+ test_event_vector_adapter_create),
+ TEST_CASE_ST(NULL, test_event_vector_adapter_free,
+ test_event_vector_adapter_create_multi),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue_tmo),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue_fallback),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue_sov),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue_eov),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue_sov_eov),
+ TEST_CASE_ST(test_event_vector_adapter_create, test_event_vector_adapter_free,
+ test_event_vector_adapter_enqueue_flush),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_event_vector_adapter(void)
+{
+ return unit_test_suite_runner(&functional_testsuite);
+}
+
+#endif
+
+REGISTER_FAST_TEST(event_vector_adapter_autotest, true, true, test_event_vector_adapter);
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index d03461316b..dda8ad82c9 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -87,6 +87,8 @@ extern int rte_event_logtype;
#define RTE_EVENT_TIMER_ADAPTER_SW_CAP \
RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC
+#define RTE_EVENT_VECTOR_ADAPTER_SW_CAP RTE_EVENT_VECTOR_ADAPTER_CAP_SOV_EOV
+
#define RTE_EVENTDEV_DETACHED (0)
#define RTE_EVENTDEV_ATTACHED (1)
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
index ff6bc43b17..ad764e2882 100644
--- a/lib/eventdev/rte_event_vector_adapter.c
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -23,6 +23,13 @@
#define MZ_NAME_MAX_LEN 64
#define DATA_MZ_NAME_FORMAT "vector_adapter_data_%d_%d_%d"
+#define MAX_VECTOR_SIZE 1024
+#define MIN_VECTOR_SIZE 1
+#define MAX_VECTOR_NS 1E9
+#define MIN_VECTOR_NS 1E5
+#define SERVICE_RING_SZ 1024
+#define SERVICE_DEQ_SZ 16
+#define SERVICE_PEND_LIST 32
RTE_LOG_REGISTER_SUFFIX(ev_vector_logtype, adapter.vector, NOTICE);
#define RTE_LOGTYPE_EVVEC ev_vector_logtype
@@ -48,6 +55,9 @@ struct rte_event_vector_adapter *adapters[RTE_EVENT_MAX_DEVS][RTE_EVENT_MAX_QUEU
} \
} while (0)
+static const struct event_vector_adapter_ops sw_ops;
+static const struct rte_event_vector_adapter_info sw_info;
+
static int
validate_conf(const struct rte_event_vector_adapter_conf *conf,
struct rte_event_vector_adapter_info *info)
@@ -222,6 +232,11 @@ rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *
goto error;
}
+ if (adapter->ops == NULL) {
+ adapter->ops = &sw_ops;
+ info = sw_info;
+ }
+
rc = validate_conf(conf, &info);
if (rc < 0) {
adapter->ops = NULL;
@@ -347,6 +362,8 @@ rte_event_vector_adapter_lookup(uint32_t adapter_id)
return NULL;
}
}
+ if (adapter->ops == NULL)
+ adapter->ops = &sw_ops;
adapter->enqueue = adapter->ops->enqueue;
adapter->adapter_id = adapter_id;
@@ -408,6 +425,7 @@ rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_
if (dev->dev_ops->vector_adapter_info_get != NULL)
return dev->dev_ops->vector_adapter_info_get(dev, info);
+ *info = sw_info;
return 0;
}
@@ -470,3 +488,377 @@ rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
return 0;
}
+
+/* Software vector adapter implementation. */
+
+struct sw_vector_adapter_service_data;
+struct sw_vector_adapter_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ uint16_t vector_sz;
+ uint64_t timestamp;
+ uint64_t event_meta;
+ uint64_t vector_tmo_ticks;
+ uint64_t fallback_event_meta;
+ struct rte_mempool *vector_mp;
+ struct rte_event_vector *vector;
+ rte_spinlock_t lock;
+ struct rte_event_vector_adapter *adapter;
+ struct rte_event_vector_adapter_stats stats;
+ struct sw_vector_adapter_service_data *service_data;
+ RTE_TAILQ_ENTRY(sw_vector_adapter_data) next;
+};
+
+struct sw_vector_adapter_service_data {
+ uint8_t pend_list;
+ uint32_t service_id;
+ struct rte_ring *ring;
+ struct sw_vector_adapter_data *pend[SERVICE_PEND_LIST];
+};
+
+static inline struct sw_vector_adapter_data *
+sw_vector_adapter_priv(const struct rte_event_vector_adapter *adapter)
+{
+ return adapter->data->adapter_priv;
+}
+
+static int
+sw_vector_adapter_flush(struct sw_vector_adapter_data *sw)
+{
+ struct rte_event ev;
+
+ if (sw->vector == NULL)
+ return -ENOBUFS;
+
+ ev.event = sw->event_meta;
+ ev.vec = sw->vector;
+ if (rte_event_enqueue_burst(sw->dev_id, sw->port_id, &ev, 1) != 1)
+ return -ENOSPC;
+
+ sw->vector = NULL;
+ sw->timestamp = 0;
+ return 0;
+}
+
+static void
+sw_vector_adapter_process_pend_list(struct sw_vector_adapter_service_data *service_data)
+{
+ struct sw_vector_adapter_data *sw;
+ int i;
+
+ if (service_data->pend_list == 0)
+ return;
+ for (i = 0; i < SERVICE_PEND_LIST; i++) {
+ if (service_data->pend_list == 0)
+ break;
+ if (service_data->pend[i] == NULL)
+ continue;
+
+ sw = service_data->pend[i];
+ if (sw->vector == NULL) {
+ service_data->pend[i] = NULL;
+ service_data->pend_list--;
+ continue;
+ }
+
+ rte_spinlock_lock(&sw->lock);
+ if (rte_get_tsc_cycles() - sw->timestamp >= sw->vector_tmo_ticks) {
+ if (sw_vector_adapter_flush(sw) != -ENOSPC) {
+ service_data->pend[i] = NULL;
+ service_data->pend_list--;
+ }
+ }
+ rte_spinlock_unlock(&sw->lock);
+ }
+}
+
+static void
+sw_vector_adapter_add_to_pend_list(struct sw_vector_adapter_service_data *service_data,
+ struct sw_vector_adapter_data *sw)
+{
+ int i, pos = SERVICE_PEND_LIST;
+
+ if (service_data->pend_list >= SERVICE_PEND_LIST) {
+ EVVEC_LOG_ERR("pend list is full");
+ return;
+ }
+ for (i = 0; i < SERVICE_PEND_LIST; i++) {
+ if (service_data->pend[i] == sw)
+ return;
+ if (service_data->pend[i] == NULL)
+ pos = i;
+ }
+ if (pos == SERVICE_PEND_LIST)
+ return;
+ service_data->pend[pos] = sw;
+ service_data->pend_list++;
+}
+
+static int
+sw_vector_adapter_service_func(void *arg)
+{
+ struct sw_vector_adapter_service_data *service_data = arg;
+ struct sw_vector_adapter_data *sw[SERVICE_DEQ_SZ];
+ int n, i;
+
+ sw_vector_adapter_process_pend_list(service_data);
+ /* Dequeue the adapter list and flush the vectors */
+ n = rte_ring_dequeue_burst(service_data->ring, (void **)&sw, SERVICE_DEQ_SZ, NULL);
+ for (i = 0; i < n; i++) {
+ if (sw[i]->vector == NULL)
+ continue;
+
+ if (rte_get_tsc_cycles() - sw[i]->timestamp < sw[i]->vector_tmo_ticks) {
+ sw_vector_adapter_add_to_pend_list(service_data, sw[i]);
+ } else {
+ if (!rte_spinlock_trylock(&sw[i]->lock)) {
+ sw_vector_adapter_add_to_pend_list(service_data, sw[i]);
+ continue;
+ }
+ if (sw_vector_adapter_flush(sw[i]) == -ENOSPC)
+ sw_vector_adapter_add_to_pend_list(service_data, sw[i]);
+ else
+ sw[i]->stats.vectors_timedout++;
+ rte_spinlock_unlock(&sw[i]->lock);
+ }
+ }
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_service_init(struct sw_vector_adapter_data *sw)
+{
+#define SW_VECTOR_ADAPTER_SERVICE_FMT "sw_vector_adapter_service"
+ struct sw_vector_adapter_service_data *service_data;
+ struct rte_service_spec service;
+ const struct rte_memzone *mz;
+ struct rte_ring *ring;
+ int ret;
+
+ mz = rte_memzone_lookup(SW_VECTOR_ADAPTER_SERVICE_FMT);
+ if (mz == NULL) {
+ mz = rte_memzone_reserve(SW_VECTOR_ADAPTER_SERVICE_FMT,
+ sizeof(struct sw_vector_adapter_service_data),
+ sw->adapter->data->socket_id, 0);
+ if (mz == NULL) {
+ EVVEC_LOG_DBG("failed to reserve memzone for service");
+ return -ENOMEM;
+ }
+ service_data = (struct sw_vector_adapter_service_data *)mz->addr;
+ memset(service_data, 0, sizeof(*service_data));
+
+ ring = rte_ring_create(SW_VECTOR_ADAPTER_SERVICE_FMT, SERVICE_RING_SZ,
+ sw->adapter->data->socket_id, 0);
+ if (ring == NULL) {
+ EVVEC_LOG_ERR("failed to create ring for service");
+ rte_memzone_free(mz);
+ return -ENOMEM;
+ }
+ service_data->ring = ring;
+
+ memset(&service, 0, sizeof(service));
+ snprintf(service.name, RTE_SERVICE_NAME_MAX, "%s", SW_VECTOR_ADAPTER_SERVICE_FMT);
+ service.callback = sw_vector_adapter_service_func;
+ service.callback_userdata = service_data;
+ service.socket_id = sw->adapter->data->socket_id;
+
+ ret = rte_service_component_register(&service, &service_data->service_id);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to register service %s with id %" PRIu32 ": err = %d",
+ service.name, service_data->service_id, ret);
+ return -ENOTSUP;
+ }
+ ret = rte_service_component_runstate_set(service_data->service_id, 1);
+ if (ret < 0) {
+ EVVEC_LOG_ERR("failed to set service runstate with id %" PRIu32
+ ": err = %d",
+ service_data->service_id, ret);
+ return -ENOTSUP;
+ }
+ }
+ service_data = (struct sw_vector_adapter_service_data *)mz->addr;
+
+ sw->service_data = service_data;
+ sw->adapter->data->unified_service_id = service_data->service_id;
+ sw->adapter->data->service_inited = 1;
+ return 0;
+}
+
+static int
+sw_vector_adapter_create(struct rte_event_vector_adapter *adapter)
+{
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
+#define SW_VECTOR_ADAPTER_NAME 64
+ char name[SW_VECTOR_ADAPTER_NAME];
+ struct sw_vector_adapter_data *sw;
+ struct rte_event ev;
+
+ snprintf(name, SW_VECTOR_ADAPTER_NAME, "sw_vector_%" PRIx32, adapter->data->id);
+ sw = rte_zmalloc_socket(name, sizeof(*sw), RTE_CACHE_LINE_SIZE, adapter->data->socket_id);
+ if (sw == NULL) {
+ EVVEC_LOG_ERR("failed to allocate space for private data");
+ rte_errno = ENOMEM;
+ return -1;
+ }
+
+ /* Connect storage to adapter instance */
+ adapter->data->adapter_priv = sw;
+ sw->adapter = adapter;
+ sw->dev_id = adapter->data->event_dev_id;
+ sw->port_id = adapter->data->event_port_id;
+
+ sw->vector_sz = adapter->data->conf.vector_sz;
+ sw->vector_mp = adapter->data->conf.vector_mp;
+ sw->vector_tmo_ticks = NSEC2TICK(adapter->data->conf.vector_timeout_ns, rte_get_timer_hz());
+
+ ev = adapter->data->conf.ev;
+ ev.op = RTE_EVENT_OP_NEW;
+ sw->event_meta = ev.event;
+
+ ev = adapter->data->conf.ev_fallback;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.priority = adapter->data->conf.ev.priority;
+ ev.queue_id = adapter->data->conf.ev.queue_id;
+ ev.sched_type = adapter->data->conf.ev.sched_type;
+ sw->fallback_event_meta = ev.event;
+
+ rte_spinlock_init(&sw->lock);
+ sw_vector_adapter_service_init(sw);
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+ rte_free(sw);
+ adapter->data->adapter_priv = NULL;
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_flush_single_event(struct sw_vector_adapter_data *sw, uint64_t obj)
+{
+ struct rte_event ev;
+
+ ev.event = sw->fallback_event_meta;
+ ev.u64 = obj;
+ if (rte_event_enqueue_burst(sw->dev_id, sw->port_id, &ev, 1) != 1)
+ return -ENOSPC;
+
+ return 0;
+}
+
+static int
+sw_vector_adapter_enqueue(struct rte_event_vector_adapter *adapter, uint64_t objs[],
+ uint16_t num_elem, uint64_t flags)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+ uint16_t cnt = num_elem, n;
+ int ret;
+
+ rte_spinlock_lock(&sw->lock);
+ if (flags & RTE_EVENT_VECTOR_ENQ_FLUSH) {
+ sw_vector_adapter_flush(sw);
+ sw->stats.vectors_flushed++;
+ rte_spinlock_unlock(&sw->lock);
+ return 0;
+ }
+
+ if (num_elem == 0) {
+ rte_spinlock_unlock(&sw->lock);
+ return 0;
+ }
+
+ if (flags & RTE_EVENT_VECTOR_ENQ_SOV && sw->vector != NULL) {
+ while (sw_vector_adapter_flush(sw) != 0)
+ ;
+ sw->stats.vectors_flushed++;
+ }
+
+ while (num_elem) {
+ if (sw->vector == NULL) {
+ ret = rte_mempool_get(sw->vector_mp, (void **)&sw->vector);
+ if (ret) {
+ if (sw_vector_adapter_flush_single_event(sw, *objs) == 0) {
+ sw->stats.alloc_failures++;
+ num_elem--;
+ objs++;
+ continue;
+ }
+ rte_errno = ENOSPC;
+ goto done;
+ }
+ sw->vector->nb_elem = 0;
+ sw->vector->attr_valid = 0;
+ sw->vector->elem_offset = 0;
+ }
+ n = RTE_MIN(sw->vector_sz - sw->vector->nb_elem, num_elem);
+ memcpy(&sw->vector->u64s[sw->vector->nb_elem], objs, n * sizeof(uint64_t));
+ sw->vector->nb_elem += n;
+ num_elem -= n;
+ objs += n;
+
+ if (sw->vector_sz == sw->vector->nb_elem) {
+ ret = sw_vector_adapter_flush(sw);
+ if (ret)
+ goto done;
+ sw->stats.vectorized++;
+ }
+ }
+
+ if (flags & RTE_EVENT_VECTOR_ENQ_EOV && sw->vector != NULL) {
+ while (sw_vector_adapter_flush(sw) != 0)
+ ;
+ sw->stats.vectors_flushed++;
+ }
+
+ if (sw->vector != NULL && sw->vector->nb_elem) {
+ sw->timestamp = rte_get_timer_cycles();
+ rte_ring_enqueue(sw->service_data->ring, sw);
+ }
+
+done:
+ rte_spinlock_unlock(&sw->lock);
+ return cnt - num_elem;
+}
+
+static int
+sw_vector_adapter_stats_get(const struct rte_event_vector_adapter *adapter,
+ struct rte_event_vector_adapter_stats *stats)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+ *stats = sw->stats;
+ return 0;
+}
+
+static int
+sw_vector_adapter_stats_reset(const struct rte_event_vector_adapter *adapter)
+{
+ struct sw_vector_adapter_data *sw = sw_vector_adapter_priv(adapter);
+
+ memset(&sw->stats, 0, sizeof(sw->stats));
+ return 0;
+}
+
+static const struct event_vector_adapter_ops sw_ops = {
+ .create = sw_vector_adapter_create,
+ .destroy = sw_vector_adapter_destroy,
+ .enqueue = sw_vector_adapter_enqueue,
+ .stats_get = sw_vector_adapter_stats_get,
+ .stats_reset = sw_vector_adapter_stats_reset,
+};
+
+static const struct rte_event_vector_adapter_info sw_info = {
+ .min_vector_sz = MIN_VECTOR_SIZE,
+ .max_vector_sz = MAX_VECTOR_SIZE,
+ .min_vector_timeout_ns = MIN_VECTOR_NS,
+ .max_vector_timeout_ns = MAX_VECTOR_NS,
+ .log2_sz = 0,
+};
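The enqueue path above (aggregate objects into the current vector, flush when `vector_sz` is reached, flush any partial vector when the EOV flag is passed) can be sketched as a small self-contained C model. All names and structures below are illustrative, not the DPDK API; the real adapter also handles mempool allocation failures and timeout-driven flushes via the service core.

```c
/* Minimal, self-contained sketch of the software vector adapter's
 * aggregation policy: objects accumulate into a vector until it is
 * full (the vector is "flushed" as an event) or the caller signals
 * end-of-vector (EOV), which flushes a partial vector.
 * Illustrative only; not the DPDK API. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VEC_SZ 4

struct sketch_adapter {
	uint64_t vector[VEC_SZ];   /* currently filling vector */
	unsigned int nb_elem;      /* elements in current vector */
	unsigned int vectors_done; /* vectors flushed because they filled */
	unsigned int eov_flushes;  /* partial vectors flushed on EOV */
};

static void
sketch_flush(struct sketch_adapter *a)
{
	/* A real adapter would enqueue a vector event here. */
	a->nb_elem = 0;
}

static unsigned int
sketch_enqueue(struct sketch_adapter *a, const uint64_t *objs,
	       unsigned int num, int eov)
{
	unsigned int i;

	for (i = 0; i < num; i++) {
		a->vector[a->nb_elem++] = objs[i];
		if (a->nb_elem == VEC_SZ) {
			sketch_flush(a);
			a->vectors_done++;
		}
	}
	if (eov && a->nb_elem != 0) {
		sketch_flush(a);
		a->eov_flushes++;
	}
	return num;
}
```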
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 916bad6c2c..b921142d7b 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -272,6 +272,8 @@ rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
return -EINVAL;
if (dev->dev_ops->vector_adapter_caps_get == NULL)
+ *caps = RTE_EVENT_VECTOR_ADAPTER_SW_CAP;
+ else
*caps = 0;
return dev->dev_ops->vector_adapter_caps_get ?
--
2.43.0
* [PATCH 3/3] app/eventdev: add vector adapter performance test
2025-04-10 18:00 ` [PATCH 0/3] " pbhagavatula
2025-04-10 18:00 ` [PATCH 1/3] eventdev: " pbhagavatula
2025-04-10 18:00 ` [PATCH 2/3] eventdev: add default software " pbhagavatula
@ 2025-04-10 18:00 ` pbhagavatula
2 siblings, 0 replies; 12+ messages in thread
From: pbhagavatula @ 2025-04-10 18:00 UTC (permalink / raw)
To: jerinj, pravin.pathak, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, amitprakashs,
s.v.naga.harish.k, anatoly.burakov
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add performance test for event vector adapter.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test-eventdev/evt_common.h | 9 +-
app/test-eventdev/evt_options.c | 14 ++
app/test-eventdev/evt_options.h | 1 +
app/test-eventdev/test_perf_atq.c | 61 +++++-
app/test-eventdev/test_perf_common.c | 281 ++++++++++++++++++-------
app/test-eventdev/test_perf_common.h | 13 +-
app/test-eventdev/test_perf_queue.c | 66 +++++-
doc/guides/rel_notes/release_25_07.rst | 5 +
doc/guides/tools/testeventdev.rst | 6 +
9 files changed, 362 insertions(+), 94 deletions(-)
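As a usage sketch, the new producer mode would be exercised along these lines (binary path, core masks, and option values are illustrative; `--vector_size` and `--vector_tmo_ns` are the existing testeventdev vector options reused by this patch):

    ./dpdk-test-eventdev -l 0-3 -s 0x8 -- \
            --test=perf_queue \
            --prod_type_vector \
            --vector_size 256 \
            --vector_tmo_ns 100000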
diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 74f9d187f3..ec824f2454 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -39,11 +39,12 @@
enum evt_prod_type {
EVT_PROD_TYPE_NONE,
- EVT_PROD_TYPE_SYNT, /* Producer type Synthetic i.e. CPU. */
- EVT_PROD_TYPE_ETH_RX_ADPTR, /* Producer type Eth Rx Adapter. */
+ EVT_PROD_TYPE_SYNT, /* Producer type Synthetic i.e. CPU. */
+ EVT_PROD_TYPE_ETH_RX_ADPTR, /* Producer type Eth Rx Adapter. */
EVT_PROD_TYPE_EVENT_TIMER_ADPTR, /* Producer type Timer Adapter. */
- EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR, /* Producer type Crypto Adapter. */
- EVT_PROD_TYPE_EVENT_DMA_ADPTR, /* Producer type DMA Adapter. */
+ EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR, /* Producer type Crypto Adapter. */
+ EVT_PROD_TYPE_EVENT_DMA_ADPTR, /* Producer type DMA Adapter. */
+ EVT_PROD_TYPE_EVENT_VECTOR_ADPTR, /* Producer type Vector adapter. */
EVT_PROD_TYPE_MAX,
};
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 323d1e724d..0e70c971eb 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -186,6 +186,13 @@ evt_parse_dma_adptr_mode(struct evt_options *opt, const char *arg)
return ret;
}
+static int
+evt_parse_vector_prod_type(struct evt_options *opt,
+ const char *arg __rte_unused)
+{
+ opt->prod_type = EVT_PROD_TYPE_EVENT_VECTOR_ADPTR;
+ return 0;
+}
static int
evt_parse_crypto_prod_type(struct evt_options *opt,
@@ -494,6 +501,7 @@ usage(char *program)
"\t in ns.\n"
"\t--prod_type_timerdev_burst : use timer device as producer\n"
"\t burst mode.\n"
+ "\t--prod_type_vector : use vector adapter as producer.\n"
"\t--nb_timers : number of timers to arm.\n"
"\t--nb_timer_adptrs : number of timer adapters to use.\n"
"\t--timer_tick_nsec : timer tick interval in ns.\n"
@@ -591,6 +599,7 @@ static struct option lgopts[] = {
{ EVT_PROD_CRYPTODEV, 0, 0, 0 },
{ EVT_PROD_TIMERDEV, 0, 0, 0 },
{ EVT_PROD_TIMERDEV_BURST, 0, 0, 0 },
+ { EVT_PROD_VECTOR, 0, 0, 0 },
{ EVT_DMA_ADPTR_MODE, 1, 0, 0 },
{ EVT_CRYPTO_ADPTR_MODE, 1, 0, 0 },
{ EVT_CRYPTO_OP_TYPE, 1, 0, 0 },
@@ -642,6 +651,7 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
{ EVT_PROD_DMADEV, evt_parse_dma_prod_type},
{ EVT_PROD_TIMERDEV, evt_parse_timer_prod_type},
{ EVT_PROD_TIMERDEV_BURST, evt_parse_timer_prod_type_burst},
+ { EVT_PROD_VECTOR, evt_parse_vector_prod_type },
{ EVT_DMA_ADPTR_MODE, evt_parse_dma_adptr_mode},
{ EVT_CRYPTO_ADPTR_MODE, evt_parse_crypto_adptr_mode},
{ EVT_CRYPTO_OP_TYPE, evt_parse_crypto_op_type},
@@ -721,4 +731,8 @@ evt_options_dump(struct evt_options *opt)
evt_dump_end;
evt_dump_nb_flows(opt);
evt_dump_worker_dequeue_depth(opt);
+ if (opt->ena_vector || opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR) {
+ evt_dump("vector_sz", "%d", opt->vector_size);
+ evt_dump("vector_tmo_ns", "%"PRIu64, opt->vector_tmo_nsec);
+ }
}
diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
index 18a893b704..4bf712bd19 100644
--- a/app/test-eventdev/evt_options.h
+++ b/app/test-eventdev/evt_options.h
@@ -38,6 +38,7 @@
#define EVT_PROD_DMADEV ("prod_type_dmadev")
#define EVT_PROD_TIMERDEV ("prod_type_timerdev")
#define EVT_PROD_TIMERDEV_BURST ("prod_type_timerdev_burst")
+#define EVT_PROD_VECTOR ("prod_type_vector")
#define EVT_DMA_ADPTR_MODE ("dma_adptr_mode")
#define EVT_CRYPTO_ADPTR_MODE ("crypto_adptr_mode")
#define EVT_CRYPTO_OP_TYPE ("crypto_op_type")
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 30c34edabd..b07b010af1 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -145,7 +145,7 @@ perf_atq_worker_burst(void *arg, const int enable_fwd_latency)
}
static int
-perf_atq_worker_vector(void *arg, const int enable_fwd_latency)
+perf_atq_worker_crypto_vector(void *arg, const int enable_fwd_latency)
{
uint16_t enq = 0, deq = 0;
struct rte_event ev;
@@ -161,10 +161,8 @@ perf_atq_worker_vector(void *arg, const int enable_fwd_latency)
if (!deq)
continue;
- if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV_VECTOR) {
- if (perf_handle_crypto_vector_ev(&ev, &pe, enable_fwd_latency))
- continue;
- }
+ if (perf_handle_crypto_vector_ev(&ev, &pe, enable_fwd_latency))
+ continue;
stage = ev.sub_event_type % nb_stages;
/* First q in pipeline, mark timestamp to compute fwd latency */
@@ -173,8 +171,8 @@ perf_atq_worker_vector(void *arg, const int enable_fwd_latency)
/* Last stage in pipeline */
if (unlikely(stage == laststage)) {
- perf_process_vector_last_stage(pool, t->ca_op_pool, &ev, w,
- enable_fwd_latency);
+ perf_process_crypto_vector_last_stage(pool, t->ca_op_pool, &ev, w,
+ enable_fwd_latency);
} else {
atq_fwd_event_vector(&ev, sched_type_list, nb_stages);
do {
@@ -188,6 +186,53 @@ perf_atq_worker_vector(void *arg, const int enable_fwd_latency)
return 0;
}
+static int
+perf_atq_worker_vector(void *arg, const int enable_fwd_latency)
+{
+ uint16_t enq = 0, deq = 0;
+ struct rte_event ev;
+ PERF_WORKER_INIT;
+
+ RTE_SET_USED(sz);
+ RTE_SET_USED(pe);
+ RTE_SET_USED(cnt);
+ RTE_SET_USED(prod_type);
+ RTE_SET_USED(prod_timer_type);
+
+ while (t->done == false) {
+ deq = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+ if (!deq)
+ continue;
+
+ if (ev.event_type != RTE_EVENT_TYPE_CPU_VECTOR) {
+ w->processed_pkts++;
+ continue;
+ }
+
+ stage = ev.sub_event_type % nb_stages;
+ if (enable_fwd_latency && stage == 0)
+ /* first stage in pipeline, mark ts to compute fwd latency */
+ ev.vec->u64s[0] = rte_get_timer_cycles();
+
+ /* Last stage in pipeline */
+ if (unlikely(stage == laststage)) {
+ w->processed_vecs++;
+ if (enable_fwd_latency)
+ w->latency += rte_get_timer_cycles() - ev.vec->u64s[0];
+
+ rte_mempool_put(pool, ev.event_ptr);
+ } else {
+ atq_fwd_event_vector(&ev, sched_type_list, nb_stages);
+ do {
+ enq = rte_event_enqueue_burst(dev, port, &ev, 1);
+ } while (!enq && !t->done);
+ }
+ }
+ perf_worker_cleanup(pool, dev, port, &ev, enq, deq);
+
+ return 0;
+}
+
static int
worker_wrapper(void *arg)
{
@@ -199,6 +244,8 @@ worker_wrapper(void *arg)
/* allow compiler to optimize */
if (opt->ena_vector && opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)
+ return perf_atq_worker_crypto_vector(arg, fwd_latency);
+ else if (opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR)
return perf_atq_worker_vector(arg, fwd_latency);
else if (!burst && !fwd_latency)
return perf_atq_worker(arg, 0);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 627f07caa1..4709de8b07 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -102,16 +102,20 @@ perf_test_result(struct evt_test *test, struct evt_options *opt)
int i;
uint64_t total = 0;
struct test_perf *t = evt_test_priv(test);
+ uint8_t is_vec;
printf("Packet distribution across worker cores :\n");
+ is_vec = (opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR);
for (i = 0; i < t->nb_workers; i++)
- total += t->worker[i].processed_pkts;
+ total += is_vec ? t->worker[i].processed_vecs : t->worker[i].processed_pkts;
for (i = 0; i < t->nb_workers; i++)
- printf("Worker %d packets: "CLGRN"%"PRIx64" "CLNRM"percentage:"
- CLGRN" %3.2f"CLNRM"\n", i,
- t->worker[i].processed_pkts,
- (((double)t->worker[i].processed_pkts)/total)
- * 100);
+ printf("Worker %d packets: " CLGRN "%" PRIx64 " " CLNRM "percentage:" CLGRN
+ " %3.2f" CLNRM "\n",
+ i, is_vec ? t->worker[i].processed_vecs : t->worker[i].processed_pkts,
+ (((double)(is_vec ? t->worker[i].processed_vecs :
+ t->worker[i].processed_pkts)) /
+ total) *
+ 100);
return t->result;
}
@@ -887,6 +891,31 @@ perf_event_crypto_producer_burst(void *arg)
return 0;
}
+static int
+perf_event_vector_producer(struct prod_data *p)
+{
+ struct rte_event_vector_adapter *adptr = p->va.vector_adptr;
+ struct evt_options *opt = p->t->opt;
+ const struct test_perf *t = p->t;
+ uint64_t objs[BURST_SIZE];
+ uint16_t enq;
+
+ if (opt->verbose_level > 1)
+ printf("%s(): lcore %d vector adapter %p\n", __func__, rte_lcore_id(), adptr);
+
+ while (t->done == false) {
+ enq = rte_event_vector_adapter_enqueue(adptr, objs, BURST_SIZE, 0);
+ while (enq < BURST_SIZE) {
+ enq += rte_event_vector_adapter_enqueue(adptr, objs + enq, BURST_SIZE - enq,
+ 0);
+ if (t->done)
+ break;
+ rte_pause();
+ }
+ }
+ return 0;
+}
+
static int
perf_producer_wrapper(void *arg)
{
@@ -930,6 +959,8 @@ perf_producer_wrapper(void *arg)
return perf_event_crypto_producer(arg);
} else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_DMA_ADPTR)
return perf_event_dma_producer(arg);
+ else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR)
+ return perf_event_vector_producer(p);
return 0;
}
@@ -947,115 +978,103 @@ processed_pkts(struct test_perf *t)
}
static inline uint64_t
-total_latency(struct test_perf *t)
+processed_vecs(struct test_perf *t)
{
uint8_t i;
uint64_t total = 0;
for (i = 0; i < t->nb_workers; i++)
- total += t->worker[i].latency;
+ total += t->worker[i].processed_vecs;
return total;
}
-
-int
-perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
- int (*worker)(void *))
+static inline uint64_t
+total_latency(struct test_perf *t)
{
- int ret, lcore_id;
- struct test_perf *t = evt_test_priv(test);
-
- int port_idx = 0;
- /* launch workers */
- RTE_LCORE_FOREACH_WORKER(lcore_id) {
- if (!(opt->wlcores[lcore_id]))
- continue;
-
- ret = rte_eal_remote_launch(worker,
- &t->worker[port_idx], lcore_id);
- if (ret) {
- evt_err("failed to launch worker %d", lcore_id);
- return ret;
- }
- port_idx++;
- }
-
- /* launch producers */
- RTE_LCORE_FOREACH_WORKER(lcore_id) {
- if (!(opt->plcores[lcore_id]))
- continue;
+ uint8_t i;
+ uint64_t total = 0;
- ret = rte_eal_remote_launch(perf_producer_wrapper,
- &t->prod[port_idx], lcore_id);
- if (ret) {
- evt_err("failed to launch perf_producer %d", lcore_id);
- return ret;
- }
- port_idx++;
- }
+ for (i = 0; i < t->nb_workers; i++)
+ total += t->worker[i].latency;
- const uint64_t total_pkts = t->outstand_pkts;
+ return total;
+}
- uint64_t dead_lock_cycles = rte_get_timer_cycles();
- int64_t dead_lock_remaining = total_pkts;
+static void
+check_work_status(struct test_perf *t, struct evt_options *opt)
+{
const uint64_t dead_lock_sample = rte_get_timer_hz() * 5;
-
+ const uint64_t freq_mhz = rte_get_timer_hz() / 1000000;
+ uint64_t dead_lock_cycles = rte_get_timer_cycles();
+ const uint64_t perf_sample = rte_get_timer_hz();
uint64_t perf_cycles = rte_get_timer_cycles();
+ const uint64_t total_pkts = t->outstand_pkts;
+ int64_t dead_lock_remaining = total_pkts;
int64_t perf_remaining = total_pkts;
- const uint64_t perf_sample = rte_get_timer_hz();
-
- static float total_mpps;
static uint64_t samples;
+ static float total_mpps;
+ int64_t remaining;
+ uint8_t is_vec;
- const uint64_t freq_mhz = rte_get_timer_hz() / 1000000;
- int64_t remaining = t->outstand_pkts - processed_pkts(t);
+ is_vec = (t->opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR);
+ remaining = t->outstand_pkts - (is_vec ? processed_vecs(t) : processed_pkts(t));
while (t->done == false) {
const uint64_t new_cycles = rte_get_timer_cycles();
if ((new_cycles - perf_cycles) > perf_sample) {
const uint64_t latency = total_latency(t);
- const uint64_t pkts = processed_pkts(t);
+ const uint64_t pkts = is_vec ? processed_vecs(t) : processed_pkts(t);
+ uint64_t fallback_pkts = processed_pkts(t);
remaining = t->outstand_pkts - pkts;
- float mpps = (float)(perf_remaining-remaining)/1000000;
+ float mpps = (float)(perf_remaining - remaining) / 1E6;
perf_remaining = remaining;
perf_cycles = new_cycles;
total_mpps += mpps;
++samples;
+
if (opt->fwd_latency && pkts > 0) {
- printf(CLGRN"\r%.3f mpps avg %.3f mpps [avg fwd latency %.3f us] "CLNRM,
- mpps, total_mpps/samples,
- (float)(latency/pkts)/freq_mhz);
+ if (is_vec) {
+ printf(CLGRN
+ "\r%.3f mvps avg %.3f mvps [avg fwd latency %.3f us] "
+ "fallback mpps %.3f" CLNRM,
+ mpps, total_mpps / samples,
+ (float)(latency / pkts) / freq_mhz,
+ fallback_pkts / 1E6);
+ } else {
+ printf(CLGRN
+ "\r%.3f mpps avg %.3f mpps [avg fwd latency %.3f us] "
+ CLNRM,
+ mpps, total_mpps / samples,
+ (float)(latency / pkts) / freq_mhz);
+ }
} else {
- printf(CLGRN"\r%.3f mpps avg %.3f mpps"CLNRM,
- mpps, total_mpps/samples);
+ if (is_vec) {
+ printf(CLGRN
+ "\r%.3f mvps avg %.3f mvps fallback mpps %.3f" CLNRM,
+ mpps, total_mpps / samples, fallback_pkts / 1E6);
+ } else {
+ printf(CLGRN "\r%.3f mpps avg %.3f mpps" CLNRM, mpps,
+ total_mpps / samples);
+ }
}
fflush(stdout);
if (remaining <= 0) {
t->result = EVT_TEST_SUCCESS;
- if (opt->prod_type == EVT_PROD_TYPE_SYNT ||
- opt->prod_type ==
- EVT_PROD_TYPE_EVENT_TIMER_ADPTR ||
- opt->prod_type ==
- EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR ||
- opt->prod_type ==
- EVT_PROD_TYPE_EVENT_DMA_ADPTR) {
+ if (opt->prod_type != EVT_PROD_TYPE_ETH_RX_ADPTR) {
t->done = true;
break;
}
}
}
-
if (new_cycles - dead_lock_cycles > dead_lock_sample &&
- (opt->prod_type == EVT_PROD_TYPE_SYNT ||
- opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR ||
- opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR ||
- opt->prod_type == EVT_PROD_TYPE_EVENT_DMA_ADPTR)) {
- remaining = t->outstand_pkts - processed_pkts(t);
+ (opt->prod_type != EVT_PROD_TYPE_ETH_RX_ADPTR)) {
+ remaining =
+ t->outstand_pkts - (is_vec ? processed_vecs(t) : processed_pkts(t));
if (dead_lock_remaining == remaining) {
rte_event_dev_dump(opt->dev_id, stdout);
evt_err("No schedules for seconds, deadlock");
@@ -1067,6 +1086,45 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
}
}
printf("\n");
+}
+
+int
+perf_launch_lcores(struct evt_test *test, struct evt_options *opt, int (*worker)(void *))
+{
+ int ret, lcore_id;
+ struct test_perf *t = evt_test_priv(test);
+
+ int port_idx = 0;
+ /* launch workers */
+ RTE_LCORE_FOREACH_WORKER(lcore_id)
+ {
+ if (!(opt->wlcores[lcore_id]))
+ continue;
+
+ ret = rte_eal_remote_launch(worker, &t->worker[port_idx], lcore_id);
+ if (ret) {
+ evt_err("failed to launch worker %d", lcore_id);
+ return ret;
+ }
+ port_idx++;
+ }
+
+ /* launch producers */
+ RTE_LCORE_FOREACH_WORKER(lcore_id)
+ {
+ if (!(opt->plcores[lcore_id]))
+ continue;
+
+ ret = rte_eal_remote_launch(perf_producer_wrapper, &t->prod[port_idx], lcore_id);
+ if (ret) {
+ evt_err("failed to launch perf_producer %d", lcore_id);
+ return ret;
+ }
+ port_idx++;
+ }
+
+ check_work_status(t, opt);
+
return 0;
}
@@ -1564,6 +1622,70 @@ perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
prod++;
}
+ } else if (opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR) {
+ struct rte_event_vector_adapter_conf conf;
+ struct rte_event_vector_adapter_info info;
+
+ ret = rte_event_vector_adapter_info_get(opt->dev_id, &info);
+ if (ret) {
+ evt_err("Failed to get vector adapter info");
+ return ret;
+ }
+
+ if (opt->vector_size < info.min_vector_sz ||
+ opt->vector_size > info.max_vector_sz) {
+ evt_err("Vector size [%d] not within limits max[%d] min[%d]",
+ opt->vector_size, info.max_vector_sz, info.min_vector_sz);
+ return -EINVAL;
+ }
+
+ if (opt->vector_tmo_nsec > info.max_vector_timeout_ns ||
+ opt->vector_tmo_nsec < info.min_vector_timeout_ns) {
+ evt_err("Vector timeout [%" PRIu64 "] not within limits "
+ "max[%" PRIu64 "] min[%" PRIu64 "]",
+ opt->vector_tmo_nsec, info.max_vector_timeout_ns,
+ info.min_vector_timeout_ns);
+ return -EINVAL;
+ }
+
+ memset(&conf, 0, sizeof(struct rte_event_vector_adapter_conf));
+ conf.event_dev_id = opt->dev_id;
+ conf.vector_sz = opt->vector_size;
+ conf.vector_timeout_ns = opt->vector_tmo_nsec;
+ conf.socket_id = opt->socket_id;
+ conf.vector_mp = t->pool;
+
+ conf.ev.sched_type = opt->sched_type_list[0];
+ conf.ev.event_type = RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU;
+
+ conf.ev_fallback.event_type = RTE_EVENT_TYPE_CPU;
+
+ prod = 0;
+ for (; port < perf_nb_event_ports(opt); port++) {
+ struct rte_event_vector_adapter *vector_adptr;
+ struct prod_data *p = &t->prod[port];
+ uint32_t service_id;
+
+ p->queue_id = prod * stride;
+ p->t = t;
+
+ conf.ev.queue_id = p->queue_id;
+
+ vector_adptr = rte_event_vector_adapter_create(&conf);
+ if (vector_adptr == NULL) {
+ evt_err("Failed to create vector adapter for port %d", port);
+ return -ENOMEM;
+ }
+ p->va.vector_adptr = vector_adptr;
+ prod++;
+
+ if (rte_event_vector_adapter_service_id_get(vector_adptr, &service_id) ==
+ 0) {
+ ret = evt_service_setup(service_id);
+ if (ret) {
+ evt_err("Failed to setup service core for vector adapter");
+ return ret;
+ }
+ rte_service_runstate_set(service_id, 1);
+ }
+ }
} else {
prod = 0;
for ( ; port < perf_nb_event_ports(opt); port++) {
@@ -1728,6 +1850,20 @@ perf_eventdev_destroy(struct evt_test *test, struct evt_options *opt)
for (i = 0; i < opt->nb_timer_adptrs; i++)
rte_event_timer_adapter_stop(t->timer_adptr[i]);
}
+
+ if (opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR) {
+ for (i = 0; i < evt_nr_active_lcores(opt->plcores); i++) {
+ struct prod_data *p = &t->prod[i];
+ uint32_t service_id;
+
+ if (p->va.vector_adptr) {
+ if (rte_event_vector_adapter_service_id_get(p->va.vector_adptr,
+ &service_id) == 0)
+ rte_service_runstate_set(service_id, 0);
+ rte_event_vector_adapter_destroy(p->va.vector_adptr);
+ }
+ }
+ }
rte_event_dev_stop(opt->dev_id);
rte_event_dev_close(opt->dev_id);
}
@@ -2119,6 +2255,9 @@ perf_mempool_setup(struct evt_test *test, struct evt_options *opt)
cache_sz, /* cache size*/
0, NULL, NULL, NULL, /* obj constructor */
NULL, opt->socket_id, 0); /* flags */
+ } else if (opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR) {
+ t->pool = rte_event_vector_pool_create(test->name, opt->pool_sz, cache_sz,
+ opt->vector_size, opt->socket_id);
} else {
t->pool = rte_pktmbuf_pool_create(test->name, /* mempool name */
opt->pool_sz, /* number of elements*/
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index d7333ad390..99df008cc7 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -16,6 +16,7 @@
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_event_timer_adapter.h>
+#include <rte_event_vector_adapter.h>
#include <rte_eventdev.h>
#include <rte_lcore.h>
#include <rte_malloc.h>
@@ -33,6 +34,7 @@ struct test_perf;
struct __rte_cache_aligned worker_data {
uint64_t processed_pkts;
+ uint64_t processed_vecs;
uint64_t latency;
uint8_t dev_id;
uint8_t port_id;
@@ -50,12 +52,17 @@ struct dma_adptr_data {
uint16_t vchan_id;
};
+struct vector_adptr_data {
+ struct rte_event_vector_adapter *vector_adptr;
+};
+
struct __rte_cache_aligned prod_data {
uint8_t dev_id;
uint8_t port_id;
uint8_t queue_id;
struct crypto_adptr_data ca;
struct dma_adptr_data da;
+ struct vector_adptr_data va;
struct test_perf *t;
};
@@ -320,9 +327,9 @@ perf_process_last_stage_latency(struct rte_mempool *const pool, enum evt_prod_ty
}
static __rte_always_inline void
-perf_process_vector_last_stage(struct rte_mempool *const pool,
- struct rte_mempool *const ca_pool, struct rte_event *const ev,
- struct worker_data *const w, const bool enable_fwd_latency)
+perf_process_crypto_vector_last_stage(struct rte_mempool *const pool,
+ struct rte_mempool *const ca_pool, struct rte_event *const ev,
+ struct worker_data *const w, const bool enable_fwd_latency)
{
struct rte_event_vector *vec = ev->vec;
struct rte_crypto_op *cop;
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 58715a2537..36fe94e190 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -147,7 +147,7 @@ perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
}
static int
-perf_queue_worker_vector(void *arg, const int enable_fwd_latency)
+perf_queue_worker_crypto_vector(void *arg, const int enable_fwd_latency)
{
uint16_t enq = 0, deq = 0;
struct rte_event ev;
@@ -163,10 +163,8 @@ perf_queue_worker_vector(void *arg, const int enable_fwd_latency)
if (!deq)
continue;
- if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV_VECTOR) {
- if (perf_handle_crypto_vector_ev(&ev, &pe, enable_fwd_latency))
- continue;
- }
+ if (perf_handle_crypto_vector_ev(&ev, &pe, enable_fwd_latency))
+ continue;
stage = ev.queue_id % nb_stages;
/* First q in pipeline, mark timestamp to compute fwd latency */
@@ -175,8 +173,8 @@ perf_queue_worker_vector(void *arg, const int enable_fwd_latency)
/* Last stage in pipeline */
if (unlikely(stage == laststage)) {
- perf_process_vector_last_stage(pool, t->ca_op_pool, &ev, w,
- enable_fwd_latency);
+ perf_process_crypto_vector_last_stage(pool, t->ca_op_pool, &ev, w,
+ enable_fwd_latency);
} else {
fwd_event_vector(&ev, sched_type_list, nb_stages);
do {
@@ -190,6 +188,52 @@ perf_queue_worker_vector(void *arg, const int enable_fwd_latency)
return 0;
}
+static int
+perf_queue_worker_vector(void *arg, const int enable_fwd_latency)
+{
+ uint16_t enq = 0, deq = 0;
+ struct rte_event ev;
+ PERF_WORKER_INIT;
+
+ RTE_SET_USED(pe);
+ RTE_SET_USED(sz);
+ RTE_SET_USED(cnt);
+ RTE_SET_USED(prod_type);
+ RTE_SET_USED(prod_timer_type);
+
+ while (t->done == false) {
+ deq = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+ if (!deq)
+ continue;
+
+ if (ev.event_type != RTE_EVENT_TYPE_CPU_VECTOR) {
+ w->processed_pkts++;
+ continue;
+ }
+
+ stage = ev.sub_event_type % nb_stages;
+ if (enable_fwd_latency && stage == 0)
+ /* first stage in pipeline, mark ts to compute fwd latency */
+ ev.vec->u64s[0] = rte_get_timer_cycles();
+
+ /* Last stage in pipeline */
+ if (unlikely(stage == laststage)) {
+ w->processed_vecs++;
+ if (enable_fwd_latency)
+ w->latency += rte_get_timer_cycles() - ev.vec->u64s[0];
+ rte_mempool_put(pool, ev.event_ptr);
+ } else {
+ fwd_event_vector(&ev, sched_type_list, nb_stages);
+ do {
+ enq = rte_event_enqueue_burst(dev, port, &ev, 1);
+ } while (!enq && !t->done);
+ }
+ }
+ perf_worker_cleanup(pool, dev, port, &ev, enq, deq);
+
+ return 0;
+}
+
static int
worker_wrapper(void *arg)
{
@@ -201,6 +245,8 @@ worker_wrapper(void *arg)
/* allow compiler to optimize */
if (opt->ena_vector && opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)
+ return perf_queue_worker_crypto_vector(arg, fwd_latency);
+ else if (opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR)
return perf_queue_worker_vector(arg, fwd_latency);
else if (!burst && !fwd_latency)
return perf_queue_worker(arg, 0);
@@ -234,8 +280,10 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
nb_ports = evt_nr_active_lcores(opt->wlcores);
nb_ports += opt->prod_type == EVT_PROD_TYPE_ETH_RX_ADPTR ||
- opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR ? 0 :
- evt_nr_active_lcores(opt->plcores);
+ opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR ||
+ opt->prod_type == EVT_PROD_TYPE_EVENT_VECTOR_ADPTR ?
+ 0 :
+ evt_nr_active_lcores(opt->plcores);
nb_queues = perf_queue_nb_event_queues(opt);
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index e6e84eeec6..a17ab13a00 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -61,6 +61,11 @@ New Features
model by introducing APIs that allow applications to offload creation of
event vectors.
+* **Added vector adapter producer mode in eventdev test.**
+
+ Added a vector adapter producer mode to the eventdev test application
+ to measure the performance of the event vector adapter.
+
Removed Items
-------------
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 58f373b867..c4e1047fbb 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -130,6 +130,10 @@ The following are the application command-line options:
Use DMA device as producer.
+* ``--prod_type_vector``
+
+ Use event vector adapter as producer.
+
* ``--timer_tick_nsec``
Used to dictate number of nano seconds between bucket traversal of the
@@ -635,6 +639,7 @@ Supported application command line options are following::
--prod_type_timerdev
--prod_type_cryptodev
--prod_type_dmadev
+ --prod_type_vector
--prod_enq_burst_sz
--timer_tick_nsec
--max_tmo_nsec
@@ -756,6 +761,7 @@ Supported application command line options are following::
--prod_type_timerdev
--prod_type_cryptodev
--prod_type_dmadev
+ --prod_type_vector
--timer_tick_nsec
--max_tmo_nsec
--expiry_nsec
--
2.43.0
end of thread, other threads:[~2025-04-10 18:01 UTC | newest]
Thread overview: 12+ messages
-- links below jump to the message on this page --
2025-03-26 13:14 [RFC 0/2] introduce event vector adapter pbhagavatula
2025-03-26 13:14 ` [RFC 1/2] eventdev: " pbhagavatula
2025-04-10 18:00 ` [PATCH 0/3] " pbhagavatula
2025-04-10 18:00 ` [PATCH 1/3] eventdev: " pbhagavatula
2025-04-10 18:00 ` [PATCH 2/3] eventdev: add default software " pbhagavatula
2025-04-10 18:00 ` [PATCH 3/3] app/eventdev: add vector adapter performance test pbhagavatula
2025-03-26 13:14 ` [RFC 2/2] eventdev: add default software vector adapter pbhagavatula
2025-03-26 14:18 ` Stephen Hemminger
2025-03-26 17:25 ` [EXTERNAL] " Pavan Nikhilesh Bhagavatula
2025-03-26 20:25 ` Stephen Hemminger
2025-03-26 14:22 ` Stephen Hemminger
2025-03-26 17:06 ` [RFC 0/2] introduce event " Pavan Nikhilesh Bhagavatula