* [RFC 0/3] Introduce event link profiles
@ 2023-08-09 14:26 pbhagavatula
2023-08-09 14:26 ` [RFC 1/3] eventdev: introduce " pbhagavatula
` (4 more replies)
0 siblings, 5 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-09 14:26 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a profile. Multiple such profiles can
be configured, based on the event device capability, using the function
`rte_event_port_link_with_profile`, which takes the same arguments as
`rte_event_port_link` plus a profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_change_profile` to change
the currently active profile on an event port. This takes effect from the
next `rte_event_dequeue_burst` call, where the event queues associated with
the newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:
Config path:

	uint8_t lowQ[4] = {4, 5, 6, 7};
	uint8_t highQ[4] = {0, 1, 2, 3};
	struct rte_event_dev_info info;

	rte_event_dev_info_get(0, &info);
	if (info.max_profiles_per_port < 2)
		return -ENOTSUP;

	rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
	rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
Worker path:

	struct rte_event ev;
	uint16_t deq;
	uint8_t empty_high_deq = 0;
	uint8_t empty_low_deq = 0;
	uint8_t is_low_deq = 0;

	while (1) {
		deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
		if (deq == 0) {
			/* Change the link profile based on work activity
			 * on the currently active profile.
			 */
			if (is_low_deq) {
				empty_low_deq++;
				if (empty_low_deq == MAX_LOW_RETRY) {
					rte_event_port_change_profile(0, 0, 0);
					is_low_deq = 0;
					empty_low_deq = 0;
				}
				continue;
			}

			empty_high_deq++;
			if (empty_high_deq == MAX_HIGH_RETRY) {
				rte_event_port_change_profile(0, 0, 1);
				is_low_deq = 1;
				empty_high_deq = 0;
			}
			continue;
		}

		/* Process the event received. */

		if (is_low_deq && is_low_deq++ == MAX_LOW_EVENTS) {
			rte_event_port_change_profile(0, 0, 0);
			is_low_deq = 0;
		}
	}
An application can use heuristic data about the load/activity of a given
event port and change its active profile to adapt to the traffic pattern.
An unlink function `rte_event_port_unlink_with_profile` is provided to
modify the links associated with a profile, and
`rte_event_port_links_get_with_profile` can be used to retrieve the links
associated with a profile.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 110 ++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/prog_guide/eventdev.rst | 58 ++++++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
drivers/event/cnxk/cn10k_worker.c | 11 +
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 72 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 ++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 34 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 22 ++
lib/eventdev/eventdev_trace_points.c | 6 +
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 226 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 4 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 5 +
33 files changed, 788 insertions(+), 108 deletions(-)
--
2.25.1
* [RFC 1/3] eventdev: introduce link profiles
2023-08-09 14:26 [RFC 0/3] Introduce event link profiles pbhagavatula
@ 2023-08-09 14:26 ` pbhagavatula
2023-08-18 10:27 ` Jerin Jacob
2023-08-09 14:26 ` [RFC 2/3] event/cnxk: implement event " pbhagavatula
` (3 subsequent siblings)
4 siblings, 1 reply; 44+ messages in thread
From: pbhagavatula @ 2023-08-09 14:26 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a profile. Multiple
such profiles can be created, based on the event device capability,
using the function `rte_event_port_link_with_profile`, which takes
the same arguments as `rte_event_port_link` plus a profile
identifier.
The maximum number of link profiles supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_change_profile`
to change the currently active profile on an event port. This takes
effect from the next `rte_event_dequeue_burst` call, where the event
queues associated with the newly active link profile will participate
in scheduling.
An unlink function `rte_event_port_unlink_with_profile` is provided
to modify the links associated with a profile, and
`rte_event_port_links_get_with_profile` can be used to retrieve the
links associated with a profile.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/prog_guide/eventdev.rst | 58 ++++++
drivers/event/cnxk/cnxk_eventdev.c | 3 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 22 ++
lib/eventdev/eventdev_trace_points.c | 6 +
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 226 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 4 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 5 +
20 files changed, 527 insertions(+), 30 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..3a1016543c 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,64 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+An application can also use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at runtime depending on heuristic data.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lowQ[4] = {4, 5, 6, 7};
+ uint8_t highQ[4] = {0, 1, 2, 3};
+
+ struct rte_event_dev_info info;
+
+ rte_event_dev_info_get(0, &info);
+ if (info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
+ rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t empty_high_deq = 0;
+ uint8_t empty_low_deq = 0;
+ uint8_t is_low_deq = 0;
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ /**
+ * Change link profile based on work activity on current
+ * active profile
+ */
+ if (is_low_deq) {
+ empty_low_deq++;
+ if (empty_low_deq == MAX_LOW_RETRY) {
+ rte_event_port_change_profile(0, 0, 0);
+ is_low_deq = 0;
+ empty_low_deq = 0;
+ }
+ continue;
+ }
+
+ empty_high_deq++;
+ if (empty_high_deq == MAX_HIGH_RETRY) {
+ rte_event_port_change_profile(0, 0, 1);
+ is_low_deq = 1;
+ empty_high_deq = 0;
+ }
+ continue;
+ }
+
+ // Process the event received.
+
+ if (is_low_deq && is_low_deq++ == MAX_LOW_EVENTS) {
+ rte_event_port_change_profile(0, 0, 0);
+ is_low_deq = 0;
+ }
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 27883a3619..529622cac6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -31,6 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ dev_info->max_profiles_per_port = 1;
}
int
@@ -133,7 +134,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
for (i = 0; i < dev->nb_event_ports; i++) {
uint16_t nb_hwgrp = 0;
- links_map = event_dev->data->links_map;
+ links_map = event_dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 60c5cd4804..580057870f 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
struct process_local_port_data
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 4b3d16735b..f615da3813 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index fa1a1ade80..ffc5550f85 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 6c5cde2468..785c12f61f 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
.max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
.max_num_events = DSW_MAX_EVENTS,
+ .max_profiles_per_port = 1,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 650266b996..0eb9358981 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9ce8b39b60..dd25749654 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
+ .max_profiles_per_port = 1,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index 8513b9a013..dc9b131641 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_EVENT_QOS |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index cfd659d774..6d1816b76d 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
*info = evdev_sw_info;
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index c68c3a2262..2331daf93d 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
@@ -180,6 +180,9 @@ struct rte_eventdev {
event_tx_adapter_enqueue_t txa_enqueue;
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ event_change_profile_t change_profile;
+ /**< PMD Event switch profile function. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -439,6 +442,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a profile to a destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -457,6 +486,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a profile from destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1350,8 +1401,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..8f1e600da5 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
return 0;
}
+static int
+dummy_event_port_change_profile(__rte_unused void *port, __rte_unused uint8_t profile)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
.txa_enqueue_same_dest =
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .change_profile = dummy_event_port_change_profile,
.data = dummy_data,
};
@@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue = dev->txa_enqueue;
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->change_profile = dev->change_profile;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..7b89b8d53f 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_link_with_profile,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_unlink_with_profile,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..cfde20b2a8 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link_with_profile,
+ lib.eventdev.port.link_with_profile)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink_with_profile,
+ lib.eventdev.port.unlink_with_profile)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..495b010de9 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -270,7 +270,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -281,7 +281,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -297,9 +296,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -953,21 +954,44 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_link_with_profile(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_link_with_profile(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -995,18 +1019,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_link_with_profile(dev_id, port_id, nb_links, profile, diag);
return diag;
}
@@ -1014,27 +1042,50 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_unlink_with_profile(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_unlink_with_profile(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1063,16 +1114,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_unlink_with_profile(dev_id, port_id, nb_unlinks, profile, diag);
return diag;
}
@@ -1116,7 +1170,50 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile. */
+ links_map = dev->data->links_map[0];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_links_get(dev_id, port_id, count);
+
+ return count;
+}
+
+int
+rte_event_port_links_get_with_profile(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1440,7 +1537,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1460,11 +1557,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index b6a4fa1495..723ecd2f28 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -446,6 +446,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1537,6 +1541,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile i.e. profile 0 of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1594,6 +1602,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile i.e. profile 0 of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1627,6 +1639,137 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile* with service priorities supplied in *priorities* on
+ * the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, the links created by the call `rte_event_port_link`
+ * will be overwritten.
+ *
+ * Event ports by default use profile 0 unless it is changed using the
+ * call ``rte_event_port_change_profile()``.
+ *
+ * The link establishment shall enable the event port *port_id* from
+ * receiving events from the specified event queue(s) supplied in *queues*
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an
+ * event port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues* which were previously supplied to
+ * rte_event_dev_configure() to the event port *port_id*
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * The priority defines the event port's servicing priority for
+ * event queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to
+ * implementation supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on a specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded (the application tried to link a queue configured
+ * with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_link_with_profile(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links,
+ uint8_t profile);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, i.e., the default profile, then this function
+ * acts the same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue(s) supplied in *queues*.
+ * Event queue to event port unlinks can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to request. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on a specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_unlink_with_profile(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1681,6 +1824,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a profile and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*.
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_links_get_with_profile(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2266,6 +2445,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the currently active profile on an event
+ * port when multiple link profiles are configured on it through the
+ * function call ``rte_event_port_link_with_profile``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile* is invalid.
+ */
+__rte_experimental
+static inline int
+rte_event_port_change_profile(uint8_t dev_id, uint8_t port_id, uint8_t profile)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_change_profile(dev_id, port_id, profile);
+
+ return fp_ops->change_profile(port, profile);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..ff5e711262 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+typedef int (*event_change_profile_t)(void *port, uint8_t profile);
+/**< @internal Change the active link profile on an event port */
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -65,6 +67,8 @@ struct rte_event_fp_ops {
/**< PMD Tx adapter enqueue same destination function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< PMD Crypto adapter enqueue function. */
+ event_change_profile_t change_profile;
+ /**< PMD event link profile switch function. */
uintptr_t reserved[6];
} __rte_cache_aligned;
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..141a45ed58 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_change_profile,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b03c10d99f..e98992e043 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,11 @@ EXPERIMENTAL {
rte_event_eth_tx_adapter_runtime_params_init;
rte_event_eth_tx_adapter_runtime_params_set;
rte_event_timer_remaining_ticks_get;
+
+ # added in 23.11
+	rte_event_port_link_with_profile;
+	rte_event_port_links_get_with_profile;
+	rte_event_port_unlink_with_profile;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
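The semantics proposed above can be sketched with a minimal, self-contained model of the per-profile link bookkeeping. This is not the proposed DPDK API: `MAX_PROFILES`, `NB_QUEUES`, `link_profile()` and `links_get_profile()` are illustrative stand-ins, and only the 0xdead "unlinked" sentinel mirrors what the eventdev layer actually uses for its links map.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PROFILES 2
#define NB_QUEUES    8

/* links_map[profile][queue] holds the service priority, or 0xdead if unlinked */
static uint16_t links_map[MAX_PROFILES][NB_QUEUES];

static void
links_init(void)
{
	for (int p = 0; p < MAX_PROFILES; p++)
		for (int q = 0; q < NB_QUEUES; q++)
			links_map[p][q] = 0xdead; /* unlinked */
}

/* Record queues[] with priorities[] under a profile; returns links made. */
static int
link_profile(const uint8_t queues[], const uint8_t priorities[],
	     uint16_t nb_links, uint8_t profile)
{
	if (profile >= MAX_PROFILES)
		return -1;
	for (uint16_t i = 0; i < nb_links; i++)
		links_map[profile][queues[i]] =
			priorities ? priorities[i] : 128 /* NORMAL */;
	return nb_links;
}

/* Collect queues linked under a profile, as links_get_with_profile would. */
static int
links_get_profile(uint8_t queues[], uint8_t profile)
{
	int nb = 0;
	for (int q = 0; q < NB_QUEUES; q++)
		if (links_map[profile][q] != 0xdead)
			queues[nb++] = (uint8_t)q;
	return nb;
}
```

With two profiles, linking {0..3} to profile 0 and {4..7} to profile 1 keeps the two sets independent, which is what lets a later profile switch re-route scheduling without relinking.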
* [RFC 2/3] event/cnxk: implement event link profiles
2023-08-09 14:26 [RFC 0/3] Introduce event link profiles pbhagavatula
2023-08-09 14:26 ` [RFC 1/3] eventdev: introduce " pbhagavatula
@ 2023-08-09 14:26 ` pbhagavatula
2023-08-09 14:26 ` [RFC 3/3] test/event: add event link profile test pbhagavatula
` (2 subsequent siblings)
4 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-09 14:26 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profile support on CN10K and CN9K.
Both platforms support up to two link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++------
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 72 ++++++++++++++++-----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 35 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
13 files changed, 153 insertions(+), 80 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index a5f48d5bbc..f063184565 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -185,8 +185,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -203,7 +203,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
@@ -289,8 +289,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -298,14 +298,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -313,7 +313,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index a2bb6fcb22..55a8894050 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 499a3aace7..d75f98e14f 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -475,6 +476,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->change_profile = cn10k_sso_hws_change_profile;
#else
RTE_SET_USED(event_dev);
#endif
@@ -618,9 +620,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -629,14 +630,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -644,11 +645,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -993,6 +1008,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d41c6efb8c 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_change_profile(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b4ee023723..e3e075e060 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -316,6 +316,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_change_profile(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6cce5477f0..29876bd31e 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -695,9 +694,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -706,14 +704,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -721,11 +719,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1006,6 +1018,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..ab463b9901 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_change_profile(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_change_profile(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9ddab095ac..b9a8314c9c 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -375,6 +375,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_change_profile(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -391,6 +392,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_change_profile(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 529622cac6..5fde1bbd0f 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -31,7 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
- dev_info->max_profiles_per_port = 1;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -129,23 +129,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map[0];
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -436,7 +438,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -447,7 +449,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 962e630256..d351314200 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [RFC 3/3] test/event: add event link profile test
2023-08-09 14:26 [RFC 0/3] Introduce event link profiles pbhagavatula
2023-08-09 14:26 ` [RFC 1/3] eventdev: introduce " pbhagavatula
2023-08-09 14:26 ` [RFC 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-08-09 14:26 ` pbhagavatula
2023-08-09 19:45 ` [RFC 0/3] Introduce event link profiles Mattias Rönnblom
2023-08-25 18:44 ` [PATCH " pbhagavatula
4 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-09 14:26 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 110 +++++++++++++++++++++++++++++++++++++++
1 file changed, 110 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 336529038e..acce7cced8 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,114 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_change_profile(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_link_with_profile(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_link_with_profile(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_links_get_with_profile(TEST_DEV_ID, 0, queues, priorities, 0);
+ TEST_ASSERT(rc == 1, "Failed to get links for profile 0");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_links_get_with_profile(TEST_DEV_ID, 0, queues, priorities, 1);
+ TEST_ASSERT(rc == 1, "Failed to get links for profile 1");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_change_profile(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ printf("rc %d\n", rc);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_change_profile(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1295,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_change_profile),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
* Re: [RFC 0/3] Introduce event link profiles
2023-08-09 14:26 [RFC 0/3] Introduce event link profiles pbhagavatula
` (2 preceding siblings ...)
2023-08-09 14:26 ` [RFC 3/3] test/event: add event link profile test pbhagavatula
@ 2023-08-09 19:45 ` Mattias Rönnblom
2023-08-10 5:17 ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-08-25 18:44 ` [PATCH " pbhagavatula
4 siblings, 1 reply; 44+ messages in thread
From: Mattias Rönnblom @ 2023-08-09 19:45 UTC (permalink / raw)
To: pbhagavatula, jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
On 2023-08-09 16:26, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be associated
> with unique identifier called as a profile, multiple such profiles can
> be configured based on the event device capability using the function
> `rte_event_port_link_with_profile` which takes arguments similar to
> `rte_event_port_link` in addition to the profile identifier.
>
What is the overall goal with this new API? What problems does it intend
to solve that the old one doesn't?
> The maximum link profiles that are supported by an event device is
> advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
>
> By default, event ports are configured to use the link profile 0 on
> initialization.
>
> Once multiple link profiles are set up and the event device is started, the
> application can use the function `rte_event_port_change_profile` to change
> the currently active profile on an event port. This effects the next
> `rte_event_dequeue_burst` call, where the event queues associated with the
> newly active link profile will participate in scheduling.
>
> Rudementary work flow would something like:
>
> Config path:
>
> uint8_t lowQ[4] = {4, 5, 6, 7};
> uint8_t highQ[4] = {0, 1, 2, 3};
>
> if (rte_event_dev_info.max_profiles_per_port < 2)
> return -ENOTSUP;
>
> rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
> rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
>
> Worker path:
>
> empty_high_deq = 0;
> empty_low_deq = 0;
> is_low_deq = 0;
> while (1) {
> deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> if (deq == 0) {
> /**
> * Change link profile based on work activity on current
> * active profile
> */
> if (is_low_deq) {
> empty_low_deq++;
> if (empty_low_deq == MAX_LOW_RETRY) {
> rte_event_port_change_profile(0, 0, 0);
> is_low_deq = 0;
> empty_low_deq = 0;
> }
> continue;
> }
>
> if (empty_high_deq == MAX_HIGH_RETRY) {
> rte_event_port_change_profile(0, 0, 1);
> is_low_deq = 1;
> empty_high_deq = 0;
> }
> continue;
> }
>
> // Process the event received.
>
> if (is_low_deq++ == MAX_LOW_EVENTS) {
> rte_event_port_change_profile(0, 0, 0);
> is_low_deq = 0;
> }
> }
>
This thing looks like the application is asked to do work scheduling.
That doesn't sound right. That's the job of the work scheduler (i.e.,
the event device).
If this thing is merely a matter of changing what queues are linked to
which ports, wouldn't a new call:
rte_event_port_link_modify()
suffice?
> An application could use heuristic data of load/activity of a given event
> port and change its active profile to adapt to the traffic pattern.
>
> An unlink function `rte_event_port_unlink_with_profile` is provided to
> modify the links associated to a profile, and
> `rte_event_port_links_get_with_profile` can be used to retrieve the links
> associated with a profile.
>
> Pavan Nikhilesh (3):
> eventdev: introduce link profiles
> event/cnxk: implement event link profiles
> test/event: add event link profile test
>
> app/test/test_eventdev.c | 110 ++++++++++
> config/rte_config.h | 1 +
> doc/guides/eventdevs/cnxk.rst | 1 +
> doc/guides/prog_guide/eventdev.rst | 58 ++++++
> drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> drivers/common/cnxk/roc_sso.c | 18 +-
> drivers/common/cnxk/roc_sso.h | 8 +-
> drivers/common/cnxk/roc_sso_priv.h | 4 +-
> drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
> drivers/event/cnxk/cn10k_worker.c | 11 +
> drivers/event/cnxk/cn10k_worker.h | 1 +
> drivers/event/cnxk/cn9k_eventdev.c | 72 ++++---
> drivers/event/cnxk/cn9k_worker.c | 22 ++
> drivers/event/cnxk/cn9k_worker.h | 2 +
> drivers/event/cnxk/cnxk_eventdev.c | 34 ++--
> drivers/event/cnxk/cnxk_eventdev.h | 10 +-
> drivers/event/dlb2/dlb2.c | 1 +
> drivers/event/dpaa/dpaa_eventdev.c | 1 +
> drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
> drivers/event/dsw/dsw_evdev.c | 1 +
> drivers/event/octeontx/ssovf_evdev.c | 2 +-
> drivers/event/opdl/opdl_evdev.c | 1 +
> drivers/event/skeleton/skeleton_eventdev.c | 1 +
> drivers/event/sw/sw_evdev.c | 1 +
> lib/eventdev/eventdev_pmd.h | 59 +++++-
> lib/eventdev/eventdev_private.c | 9 +
> lib/eventdev/eventdev_trace.h | 22 ++
> lib/eventdev/eventdev_trace_points.c | 6 +
> lib/eventdev/rte_eventdev.c | 146 ++++++++++---
> lib/eventdev/rte_eventdev.h | 226 +++++++++++++++++++++
> lib/eventdev/rte_eventdev_core.h | 4 +
> lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> lib/eventdev/version.map | 5 +
> 33 files changed, 788 insertions(+), 108 deletions(-)
>
> --
> 2.25.1
>
* RE: [EXT] Re: [RFC 0/3] Introduce event link profiles
2023-08-09 19:45 ` [RFC 0/3] Introduce event link profiles Mattias Rönnblom
@ 2023-08-10 5:17 ` Pavan Nikhilesh Bhagavatula
2023-08-12 5:52 ` Mattias Rönnblom
0 siblings, 1 reply; 44+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2023-08-10 5:17 UTC (permalink / raw)
To: Mattias Rönnblom, Jerin Jacob Kollanukkaran,
Shijith Thotton, timothy.mcdaniel, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, s.v.naga.harish.k,
anatoly.burakov
Cc: dev
> On 2023-08-09 16:26, pbhagavatula@marvell.com wrote:
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > A collection of event queues linked to an event port can be associated
> > with unique identifier called as a profile, multiple such profiles can
> > be configured based on the event device capability using the function
> > `rte_event_port_link_with_profile` which takes arguments similar to
> > `rte_event_port_link` in addition to the profile identifier.
> >
>
> What is the overall goal with this new API? What problems does it intend
> to solve, that the old one doesn't.
Linking and unlinking currently have a large overhead, and when they need to be done
in the fastpath, we have to wait for unlinks to complete and handle other corner cases.
This patch set solves that by avoiding linking/unlinking altogether in the fastpath: a
preconfigured set of link profiles is created, out of which only one is active at a
time, and the active profile can be changed in the fastpath with a simple function
call. There is no link/unlink or wait-for-unlink overhead.
>
> > The maximum link profiles that are supported by an event device is
> > advertised through the structure member
> > `rte_event_dev_info::max_profiles_per_port`.
> >
> > By default, event ports are configured to use the link profile 0 on
> > initialization.
> >
> > Once multiple link profiles are set up and the event device is started, the
> > application can use the function `rte_event_port_change_profile` to
> change
> > the currently active profile on an event port. This effects the next
> > `rte_event_dequeue_burst` call, where the event queues associated with
> the
> > newly active link profile will participate in scheduling.
> >
> > Rudementary work flow would something like:
> >
> > Config path:
> >
> > uint8_t lowQ[4] = {4, 5, 6, 7};
> > uint8_t highQ[4] = {0, 1, 2, 3};
> >
> > if (rte_event_dev_info.max_profiles_per_port < 2)
> > return -ENOTSUP;
> >
> > rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
> > rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
> >
> > Worker path:
> >
> > empty_high_deq = 0;
> > empty_low_deq = 0;
> > is_low_deq = 0;
> > while (1) {
> > deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> > if (deq == 0) {
> > /**
> > * Change link profile based on work activity on current
> > * active profile
> > */
> > if (is_low_deq) {
> > empty_low_deq++;
> > if (empty_low_deq == MAX_LOW_RETRY) {
> > rte_event_port_change_profile(0, 0, 0);
> > is_low_deq = 0;
> > empty_low_deq = 0;
> > }
> > continue;
> > }
> >
> > if (empty_high_deq == MAX_HIGH_RETRY) {
> > rte_event_port_change_profile(0, 0, 1);
> > is_low_deq = 1;
> > empty_high_deq = 0;
> > }
> > continue;
> > }
> >
> > // Process the event received.
> >
> > if (is_low_deq++ == MAX_LOW_EVENTS) {
> > rte_event_port_change_profile(0, 0, 0);
> > is_low_deq = 0;
> > }
> > }
> >
>
> This thing looks like the application is asked to do work scheduling.
> That doesn't sound right. That's the job of the work scheduler (i.e.,
> the event device).
>
> If this thing is merely a matter of changing what queues are linked to
> which ports, wouldn't a new call:
> rte_event_port_link_modify()
> suffice?
Some applications divide their available lcores into multiple types of
workers, each working on a unique set of event queues; an application might
need to modify the worker ratio based on various parameters at run time
without a lot of overhead.
Modifying links wouldn't work because we might want to restore the previous
links based on the new traffic pattern etc.
>
> > An application could use heuristic data of load/activity of a given event
> > port and change its active profile to adapt to the traffic pattern.
> >
> > An unlink function `rte_event_port_unlink_with_profile` is provided to
> > modify the links associated to a profile, and
> > `rte_event_port_links_get_with_profile` can be used to retrieve the links
> > associated with a profile.
> >
> > Pavan Nikhilesh (3):
> > eventdev: introduce link profiles
> > event/cnxk: implement event link profiles
> > test/event: add event link profile test
> >
> > app/test/test_eventdev.c | 110 ++++++++++
> > config/rte_config.h | 1 +
> > doc/guides/eventdevs/cnxk.rst | 1 +
> > doc/guides/prog_guide/eventdev.rst | 58 ++++++
> > drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> > drivers/common/cnxk/roc_sso.c | 18 +-
> > drivers/common/cnxk/roc_sso.h | 8 +-
> > drivers/common/cnxk/roc_sso_priv.h | 4 +-
> > drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
> > drivers/event/cnxk/cn10k_worker.c | 11 +
> > drivers/event/cnxk/cn10k_worker.h | 1 +
> > drivers/event/cnxk/cn9k_eventdev.c | 72 ++++---
> > drivers/event/cnxk/cn9k_worker.c | 22 ++
> > drivers/event/cnxk/cn9k_worker.h | 2 +
> > drivers/event/cnxk/cnxk_eventdev.c | 34 ++--
> > drivers/event/cnxk/cnxk_eventdev.h | 10 +-
> > drivers/event/dlb2/dlb2.c | 1 +
> > drivers/event/dpaa/dpaa_eventdev.c | 1 +
> > drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
> > drivers/event/dsw/dsw_evdev.c | 1 +
> > drivers/event/octeontx/ssovf_evdev.c | 2 +-
> > drivers/event/opdl/opdl_evdev.c | 1 +
> > drivers/event/skeleton/skeleton_eventdev.c | 1 +
> > drivers/event/sw/sw_evdev.c | 1 +
> > lib/eventdev/eventdev_pmd.h | 59 +++++-
> > lib/eventdev/eventdev_private.c | 9 +
> > lib/eventdev/eventdev_trace.h | 22 ++
> > lib/eventdev/eventdev_trace_points.c | 6 +
> > lib/eventdev/rte_eventdev.c | 146 ++++++++++---
> > lib/eventdev/rte_eventdev.h | 226 +++++++++++++++++++++
> > lib/eventdev/rte_eventdev_core.h | 4 +
> > lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> > lib/eventdev/version.map | 5 +
> > 33 files changed, 788 insertions(+), 108 deletions(-)
> >
> > --
> > 2.25.1
> >
* Re: [EXT] Re: [RFC 0/3] Introduce event link profiles
2023-08-10 5:17 ` [EXT] " Pavan Nikhilesh Bhagavatula
@ 2023-08-12 5:52 ` Mattias Rönnblom
2023-08-14 11:29 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 44+ messages in thread
From: Mattias Rönnblom @ 2023-08-12 5:52 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
Shijith Thotton, timothy.mcdaniel, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, s.v.naga.harish.k,
anatoly.burakov
Cc: dev
On 2023-08-10 07:17, Pavan Nikhilesh Bhagavatula wrote:
>> On 2023-08-09 16:26, pbhagavatula@marvell.com wrote:
>>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>>
>>> A collection of event queues linked to an event port can be associated
>>> with unique identifier called as a profile, multiple such profiles can
>>> be configured based on the event device capability using the function
>>> `rte_event_port_link_with_profile` which takes arguments similar to
>>> `rte_event_port_link` in addition to the profile identifier.
>>>
>>
>> What is the overall goal with this new API? What problems does it intend
>> to solve, that the old one doesn't.
>
> Linking and unlinking currently has huge overhead and when it needs to be done
> in fastpath, we have to wait for unlinks to complete and handle other corner cases.
>
OK, so this API change is specific to some particular hardware? Is this
true for some other event devices? That "huge overhead" goes to "simple
function call" for unlinking+linking, provided the target configuration
is known in advance.
What is the overall use case?
> This patch set solves it by avoiding linking/unlinking altogether in fastpath by
> preconfigured set of link profiles out of which only one would be active and can
> be changed in fastpath with a simple function call. There is no link/unlink waiting for
> unlink overhead.
>
>>
>>> The maximum link profiles that are supported by an event device is
>>> advertised through the structure member
>>> `rte_event_dev_info::max_profiles_per_port`.
>>>
>>> By default, event ports are configured to use the link profile 0 on
>>> initialization.
>>>
>>> Once multiple link profiles are set up and the event device is started, the
>>> application can use the function `rte_event_port_change_profile` to
>> change
>>> the currently active profile on an event port. This effects the next
>>> `rte_event_dequeue_burst` call, where the event queues associated with
>> the
>>> newly active link profile will participate in scheduling.
>>>
>>> Rudementary work flow would something like:
>>>
>>> Config path:
>>>
>>> uint8_t lowQ[4] = {4, 5, 6, 7};
>>> uint8_t highQ[4] = {0, 1, 2, 3};
>>>
>>> if (rte_event_dev_info.max_profiles_per_port < 2)
>>> return -ENOTSUP;
>>>
>>> rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
>>> rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
>>>
>>> Worker path:
>>>
>>> empty_high_deq = 0;
>>> empty_low_deq = 0;
>>> is_low_deq = 0;
>>> while (1) {
>>> deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
>>> if (deq == 0) {
>>> /**
>>> * Change link profile based on work activity on current
>>> * active profile
>>> */
>>> if (is_low_deq) {
>>> empty_low_deq++;
>>> if (empty_low_deq == MAX_LOW_RETRY) {
>>> rte_event_port_change_profile(0, 0, 0);
>>> is_low_deq = 0;
>>> empty_low_deq = 0;
>>> }
>>> continue;
>>> }
>>>
>>> if (empty_high_deq == MAX_HIGH_RETRY) {
>>> rte_event_port_change_profile(0, 0, 1);
>>> is_low_deq = 1;
>>> empty_high_deq = 0;
>>> }
>>> continue;
>>> }
>>>
>>> // Process the event received.
>>>
>>> if (is_low_deq++ == MAX_LOW_EVENTS) {
>>> rte_event_port_change_profile(0, 0, 0);
>>> is_low_deq = 0;
>>> }
>>> }
>>>
>>
>> This thing looks like the application is asked to do work scheduling.
>> That doesn't sound right. That's the job of the work scheduler (i.e.,
>> the event device).
>>
>> If this thing is merely a matter of changing what queues are linked to
>> which ports, wouldn't a new call:
>> rte_event_port_link_modify()
>> suffice?
>
>
> Some applications divide their available lcores into multiple types of
> workers which each work on a unique set of event queues, application might
> need to modify the worker ratio based on various parameters at run time
> without a lot of overhead.
>
> Modifying links wouldn’t work because we might want to restore previous links
> based on the new traffic pattern etc.,.
>
>>
>>> An application could use heuristic data of load/activity of a given event
>>> port and change its active profile to adapt to the traffic pattern.
>>>
>>> An unlink function `rte_event_port_unlink_with_profile` is provided to
>>> modify the links associated to a profile, and
>>> `rte_event_port_links_get_with_profile` can be used to retrieve the links
>>> associated with a profile.
>>>
>>> Pavan Nikhilesh (3):
>>> eventdev: introduce link profiles
>>> event/cnxk: implement event link profiles
>>> test/event: add event link profile test
>>>
>>> app/test/test_eventdev.c | 110 ++++++++++
>>> config/rte_config.h | 1 +
>>> doc/guides/eventdevs/cnxk.rst | 1 +
>>> doc/guides/prog_guide/eventdev.rst | 58 ++++++
>>> drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
>>> drivers/common/cnxk/roc_sso.c | 18 +-
>>> drivers/common/cnxk/roc_sso.h | 8 +-
>>> drivers/common/cnxk/roc_sso_priv.h | 4 +-
>>> drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
>>> drivers/event/cnxk/cn10k_worker.c | 11 +
>>> drivers/event/cnxk/cn10k_worker.h | 1 +
>>> drivers/event/cnxk/cn9k_eventdev.c | 72 ++++---
>>> drivers/event/cnxk/cn9k_worker.c | 22 ++
>>> drivers/event/cnxk/cn9k_worker.h | 2 +
>>> drivers/event/cnxk/cnxk_eventdev.c | 34 ++--
>>> drivers/event/cnxk/cnxk_eventdev.h | 10 +-
>>> drivers/event/dlb2/dlb2.c | 1 +
>>> drivers/event/dpaa/dpaa_eventdev.c | 1 +
>>> drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
>>> drivers/event/dsw/dsw_evdev.c | 1 +
>>> drivers/event/octeontx/ssovf_evdev.c | 2 +-
>>> drivers/event/opdl/opdl_evdev.c | 1 +
>>> drivers/event/skeleton/skeleton_eventdev.c | 1 +
>>> drivers/event/sw/sw_evdev.c | 1 +
>>> lib/eventdev/eventdev_pmd.h | 59 +++++-
>>> lib/eventdev/eventdev_private.c | 9 +
>>> lib/eventdev/eventdev_trace.h | 22 ++
>>> lib/eventdev/eventdev_trace_points.c | 6 +
>>> lib/eventdev/rte_eventdev.c | 146 ++++++++++---
>>> lib/eventdev/rte_eventdev.h | 226 +++++++++++++++++++++
>>> lib/eventdev/rte_eventdev_core.h | 4 +
>>> lib/eventdev/rte_eventdev_trace_fp.h | 8 +
>>> lib/eventdev/version.map | 5 +
>>> 33 files changed, 788 insertions(+), 108 deletions(-)
>>>
>>> --
>>> 2.25.1
>>>
* RE: [EXT] Re: [RFC 0/3] Introduce event link profiles
2023-08-12 5:52 ` Mattias Rönnblom
@ 2023-08-14 11:29 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 44+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2023-08-14 11:29 UTC (permalink / raw)
To: Mattias Rönnblom, Jerin Jacob Kollanukkaran,
Shijith Thotton, timothy.mcdaniel, hemant.agrawal, sachin.saxena,
mattias.ronnblom, liangma, peter.mccarthy, harry.van.haaren,
erik.g.carrillo, abhinandan.gujjar, s.v.naga.harish.k,
anatoly.burakov
Cc: dev
> -----Original Message-----
> From: Mattias Rönnblom <hofors@lysator.liu.se>
> Sent: Saturday, August 12, 2023 11:23 AM
> To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Shijith Thotton
> <sthotton@marvell.com>; timothy.mcdaniel@intel.com;
> hemant.agrawal@nxp.com; sachin.saxena@nxp.com;
> mattias.ronnblom@ericsson.com; liangma@liangbit.com;
> peter.mccarthy@intel.com; harry.van.haaren@intel.com;
> erik.g.carrillo@intel.com; abhinandan.gujjar@intel.com;
> s.v.naga.harish.k@intel.com; anatoly.burakov@intel.com
> Cc: dev@dpdk.org
> Subject: Re: [EXT] Re: [RFC 0/3] Introduce event link profiles
>
> On 2023-08-10 07:17, Pavan Nikhilesh Bhagavatula wrote:
> >> On 2023-08-09 16:26, pbhagavatula@marvell.com wrote:
> >>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >>>
> >>> A collection of event queues linked to an event port can be associated
> >>> with unique identifier called as a profile, multiple such profiles can
> >>> be configured based on the event device capability using the function
> >>> `rte_event_port_link_with_profile` which takes arguments similar to
> >>> `rte_event_port_link` in addition to the profile identifier.
> >>>
> >>
> >> What is the overall goal with this new API? What problems does it intend
> >> to solve, that the old one doesn't.
> >
> > Linking and unlinking currently has huge overhead and when it needs to be
> done
> > in fastpath, we have to wait for unlinks to complete and handle other
> corner cases.
> >
>
> OK, so this API change is specific to some particular hardware? Is this
> true for some other event devices? That "huge overhead" goes to "simple
> function call" for unlinking+linking, provided the target configuration
> is known in advance.
CNXK supports this feature in HW as an optional feature.
Drivers can return -ENOTSUP, or set info::max_profiles_per_port to 1, if the
feature cannot be implemented or they decide not to implement it.
That said, I believe it can be easily integrated into other SW event devices.
>
> What is the overall use case?
>
One of the primary use cases our customers are interested in is modifying the
worker ratio on the fly without the overhead of unlink/relink, and also making
sure low-priority queues are scheduled at least once in a while without worrying
about affinities and weights.
> > This patch set solves it by avoiding linking/unlinking altogether in fastpath
> by
> > preconfigured set of link profiles out of which only one would be active and
> can
> > be changed in fastpath with a simple function call. There is no link/unlink
> waiting for
> > unlink overhead.
> >
> >>
> >>> The maximum link profiles that are supported by an event device is
> >>> advertised through the structure member
> >>> `rte_event_dev_info::max_profiles_per_port`.
> >>>
> >>> By default, event ports are configured to use the link profile 0 on
> >>> initialization.
> >>>
> >>> Once multiple link profiles are set up and the event device is started, the
> >>> application can use the function `rte_event_port_change_profile` to
> >> change
> >>> the currently active profile on an event port. This effects the next
> >>> `rte_event_dequeue_burst` call, where the event queues associated
> with
> >> the
> >>> newly active link profile will participate in scheduling.
> >>>
> >>> Rudementary work flow would something like:
> >>>
> >>> Config path:
> >>>
> >>> uint8_t lowQ[4] = {4, 5, 6, 7};
> >>> uint8_t highQ[4] = {0, 1, 2, 3};
> >>>
> >>> if (rte_event_dev_info.max_profiles_per_port < 2)
> >>> return -ENOTSUP;
> >>>
> >>> rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
> >>> rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
> >>>
> >>> Worker path:
> >>>
> >>> empty_high_deq = 0;
> >>> empty_low_deq = 0;
> >>> is_low_deq = 0;
> >>> while (1) {
> >>> deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> >>> if (deq == 0) {
> >>> /**
> >>> * Change link profile based on work activity on current
> >>> * active profile
> >>> */
> >>> if (is_low_deq) {
> >>> empty_low_deq++;
> >>> if (empty_low_deq == MAX_LOW_RETRY) {
> >>> rte_event_port_change_profile(0, 0, 0);
> >>> is_low_deq = 0;
> >>> empty_low_deq = 0;
> >>> }
> >>> continue;
> >>> }
> >>>
> >>> if (empty_high_deq == MAX_HIGH_RETRY) {
> >>> rte_event_port_change_profile(0, 0, 1);
> >>> is_low_deq = 1;
> >>> empty_high_deq = 0;
> >>> }
> >>> continue;
> >>> }
> >>>
> >>> // Process the event received.
> >>>
> >>> if (is_low_deq++ == MAX_LOW_EVENTS) {
> >>> rte_event_port_change_profile(0, 0, 0);
> >>> is_low_deq = 0;
> >>> }
> >>> }
> >>>
> >>
> >> This thing looks like the application is asked to do work scheduling.
> >> That doesn't sound right. That's the job of the work scheduler (i.e.,
> >> the event device).
> >>
> >> If this thing is merely a matter of changing what queues are linked to
> >> which ports, wouldn't a new call:
> >> rte_event_port_link_modify()
> >> suffice?
> >
> >
> > Some applications divide their available lcores into multiple types of
> > workers which each work on a unique set of event queues, application
> might
> > need to modify the worker ratio based on various parameters at run time
> > without a lot of overhead.
> >
> > Modifying links wouldn’t work because we might want to restore previous
> links
> > based on the new traffic pattern etc.,.
> >
> >>
> >>> An application could use heuristic data of load/activity of a given event
> >>> port and change its active profile to adapt to the traffic pattern.
> >>>
> >>> An unlink function `rte_event_port_unlink_with_profile` is provided to
> >>> modify the links associated to a profile, and
> >>> `rte_event_port_links_get_with_profile` can be used to retrieve the
> links
> >>> associated with a profile.
> >>>
> >>> Pavan Nikhilesh (3):
> >>> eventdev: introduce link profiles
> >>> event/cnxk: implement event link profiles
> >>> test/event: add event link profile test
> >>>
> >>> app/test/test_eventdev.c | 110 ++++++++++
> >>> config/rte_config.h | 1 +
> >>> doc/guides/eventdevs/cnxk.rst | 1 +
> >>> doc/guides/prog_guide/eventdev.rst | 58 ++++++
> >>> drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> >>> drivers/common/cnxk/roc_sso.c | 18 +-
> >>> drivers/common/cnxk/roc_sso.h | 8 +-
> >>> drivers/common/cnxk/roc_sso_priv.h | 4 +-
> >>> drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
> >>> drivers/event/cnxk/cn10k_worker.c | 11 +
> >>> drivers/event/cnxk/cn10k_worker.h | 1 +
> >>> drivers/event/cnxk/cn9k_eventdev.c | 72 ++++---
> >>> drivers/event/cnxk/cn9k_worker.c | 22 ++
> >>> drivers/event/cnxk/cn9k_worker.h | 2 +
> >>> drivers/event/cnxk/cnxk_eventdev.c | 34 ++--
> >>> drivers/event/cnxk/cnxk_eventdev.h | 10 +-
> >>> drivers/event/dlb2/dlb2.c | 1 +
> >>> drivers/event/dpaa/dpaa_eventdev.c | 1 +
> >>> drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
> >>> drivers/event/dsw/dsw_evdev.c | 1 +
> >>> drivers/event/octeontx/ssovf_evdev.c | 2 +-
> >>> drivers/event/opdl/opdl_evdev.c | 1 +
> >>> drivers/event/skeleton/skeleton_eventdev.c | 1 +
> >>> drivers/event/sw/sw_evdev.c | 1 +
> >>> lib/eventdev/eventdev_pmd.h | 59 +++++-
> >>> lib/eventdev/eventdev_private.c | 9 +
> >>> lib/eventdev/eventdev_trace.h | 22 ++
> >>> lib/eventdev/eventdev_trace_points.c | 6 +
> >>> lib/eventdev/rte_eventdev.c | 146 ++++++++++---
> >>> lib/eventdev/rte_eventdev.h | 226 +++++++++++++++++++++
> >>> lib/eventdev/rte_eventdev_core.h | 4 +
> >>> lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> >>> lib/eventdev/version.map | 5 +
> >>> 33 files changed, 788 insertions(+), 108 deletions(-)
> >>>
> >>> --
> >>> 2.25.1
> >>>
* Re: [RFC 1/3] eventdev: introduce link profiles
2023-08-09 14:26 ` [RFC 1/3] eventdev: introduce " pbhagavatula
@ 2023-08-18 10:27 ` Jerin Jacob
0 siblings, 0 replies; 44+ messages in thread
From: Jerin Jacob @ 2023-08-18 10:27 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Wed, Aug 9, 2023 at 7:56 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be
> associated with a unique identifier called a profile; multiple
> such profiles can be created based on the event device capability
> using the function `rte_event_port_link_with_profile` which takes
> arguments similar to `rte_event_port_link` in addition to the profile
> identifier.
>
> The maximum link profiles that are supported by an event device
> is advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
> By default, event ports are configured to use the link profile 0
> on initialization.
>
> Once multiple link profiles are set up and the event device is started,
> the application can use the function `rte_event_port_change_profile`
> to change the currently active profile on an event port. This affects
> the next `rte_event_dequeue_burst` call, where the event queues
> associated with the newly active link profile will participate in
> scheduling.
>
> An unlink function `rte_event_port_unlink_with_profile` is provided
> to modify the links associated to a profile, and
> `rte_event_port_links_get_with_profile` can be used to retrieve the
> links associated with a profile.
In the rte_flow APIs, a similar concept is called a template.
I think the "why" part is missing in the comment and programming guide,
i.e. improving performance by creating the template/profile in the slow
path for faster link/unlink operation in the fast path.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Some suggestions on API naming, to have a proper prefix and keep the verb last.
rte_event_port_profile_link_set()
rte_event_port_profile_link_get()
rte_event_port_profile_unlink()
rte_event_port_profile_switch()
>
Please start with a heading for this new block.
> +An application can also use link profiles if supported by the underlying event device to setup up
> +multiple link profile per port and change them run time depending up on heuristic data.
> +
> +An Example use case could be as follows.
> +
> +Config path:
> +
> +.. code-block:: c
> +
> + uint8_t lowQ[4] = {4, 5, 6, 7};
> + uint8_t highQ[4] = {0, 1, 2, 3};
Please remove Hungarian notation.
> +
> + if (rte_event_dev_info.max_profiles_per_port < 2)
> + return -ENOTSUP;
> +
> + rte_event_port_link_with_profile(0, 0, highQ, NULL, 4, 0);
> + rte_event_port_link_with_profile(0, 0, lowQ, NULL, 4, 1);
> +
> +Worker path:
> +
> +.. code-block:: c
> +
> + uint8_t empty_high_deq = 0;
> + uint8_t empty_low_deq = 0;
> + uint8_t is_low_deq = 0;
> + while (1) {
> + deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> + if (deq == 0) {
> + /**
> + * Change link profile based on work activity on current
> + * active profile
> + */
> + if (is_low_deq) {
> + empty_low_deq++;
> + if (empty_low_deq == MAX_LOW_RETRY) {
> + rte_event_port_change_profile(0, 0, 0);
> + is_low_deq = 0;
> + empty_low_deq = 0;
> + }
> + continue;
> + }
> +
> + if (empty_high_deq == MAX_HIGH_RETRY) {
> + rte_event_port_change_profile(0, 0, 1);
> + is_low_deq = 1;
> + empty_high_deq = 0;
> + }
> + continue;
> + }
> +
> + // Process the event received.
> +
> + if (is_low_deq++ == MAX_LOW_EVENTS) {
> + rte_event_port_change_profile(0, 0, 0);
> + is_low_deq = 0;
> + }
> + }
As far as the programming document is concerned, we don't need to put
such complicated logic here.
We can put a comment, something like “Find the profile ID to switch”:
uint8_t profile_id_to_switch = app_find_profile_id_to_switch();
rte_event_port_profile_switch(.., profile_id_to_switch );
>
> diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
> index 60c5cd4804..580057870f 100644
> --- a/drivers/event/dlb2/dlb2.c
> +++ b/drivers/event/dlb2/dlb2.c
> @@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
> RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
> + .max_profiles_per_port = 1,
> };
>
> struct process_local_port_data
> diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
> index 4b3d16735b..f615da3813 100644
> --- a/drivers/event/dpaa/dpaa_eventdev.c
> +++ b/drivers/event/dpaa/dpaa_eventdev.c
> @@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
> RTE_EVENT_DEV_CAP_NONSEQ_MODE |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
We always add a CAPA flag for new eventdev features. Please add
RTE_EVENT_DEV_CAPA_PROFILE_LINK or so.
> @@ -131,6 +131,11 @@ EXPERIMENTAL {
> rte_event_eth_tx_adapter_runtime_params_init;
> rte_event_eth_tx_adapter_runtime_params_set;
> rte_event_timer_remaining_ticks_get;
> +
> + # added in 23.11
> + rte_event_port_link_with_profile;
> + rte_event_port_unlink_with_profile;
> + rte_event_port_links_get_with_profile;
Missed the API to switch the profile.
> };
>
> INTERNAL {
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH 0/3] Introduce event link profiles
2023-08-09 14:26 [RFC 0/3] Introduce event link profiles pbhagavatula
` (3 preceding siblings ...)
2023-08-09 19:45 ` [RFC 0/3] Introduce event link profiles Mattias Rönnblom
@ 2023-08-25 18:44 ` pbhagavatula
2023-08-25 18:44 ` [PATCH 1/3] eventdev: introduce " pbhagavatula
` (3 more replies)
4 siblings, 4 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-25 18:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a profile; multiple such profiles can
be configured based on the event device capability using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.
The maximum link profiles that are supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This affects the next
`rte_event_dequeue_burst` call, where the event queues associated with the
newly active link profile will participate in scheduling.
A rudimentary workflow would look something like:
Config path:
uint8_t lowQ[4] = {4, 5, 6, 7};
uint8_t highQ[4] = {0, 1, 2, 3};
if (rte_event_dev_info.max_profiles_per_port < 2)
return -ENOTSUP;
rte_event_port_profile_links_set(0, 0, highQ, NULL, 4, 0);
rte_event_port_profile_links_set(0, 0, lowQ, NULL, 4, 1);
Worker path:
empty_high_deq = 0;
empty_low_deq = 0;
is_low_deq = 0;
while (1) {
deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
if (deq == 0) {
/**
* Change link profile based on work activity on current
* active profile
*/
if (is_low_deq) {
empty_low_deq++;
if (empty_low_deq == MAX_LOW_RETRY) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
empty_low_deq = 0;
}
continue;
}
if (empty_high_deq == MAX_HIGH_RETRY) {
rte_event_port_profile_switch(0, 0, 1);
is_low_deq = 1;
empty_high_deq = 0;
}
continue;
}
// Process the event received.
if (is_low_deq++ == MAX_LOW_EVENTS) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
}
}
An application could use heuristic data of load/activity of a given event
port and change its active profile to adapt to the traffic pattern.
An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 22 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
drivers/event/cnxk/cn10k_worker.c | 11 +
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 ++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 22 ++
lib/eventdev/eventdev_trace_points.c | 6 +
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 4 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 6 +
36 files changed, 812 insertions(+), 110 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH 1/3] eventdev: introduce link profiles
2023-08-25 18:44 ` [PATCH " pbhagavatula
@ 2023-08-25 18:44 ` pbhagavatula
2023-08-25 18:44 ` [PATCH 2/3] event/cnxk: implement event " pbhagavatula
` (2 subsequent siblings)
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-25 18:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a profile; multiple
such profiles can be created based on the event device capability
using the function `rte_event_port_profile_links_set`, which takes
arguments similar to `rte_event_port_link` in addition to the profile
identifier.
The maximum link profiles that are supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This affects
the next `rte_event_dequeue_burst` call, where the event queues
associated with the newly active link profile will participate in
scheduling.
An unlink function `rte_event_port_profile_unlink` is provided
to modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the
links associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 17 ++
drivers/event/cnxk/cnxk_eventdev.c | 3 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 22 ++
lib/eventdev/eventdev_trace_points.c | 6 +
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 4 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 6 +
22 files changed, 533 insertions(+), 30 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..1c0082352b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
carry_flow_id =
maintenance_free =
runtime_queue_attr =
+profile_links =
;
; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..9c07870a79 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+Linking Queues to Ports with profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles if supported by the underlying event device to setup up
+multiple link profile per port and change them run time depending up on heuristic data.
+Using Link profiles can reduce the overhead of linking/unlinking and wait for unlinks in progress
+in fast-path and gives applications the ability to switch between preset profiles on the fly.
+
+An Example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lq[4] = {4, 5, 6, 7};
+ uint8_t hq[4] = {0, 1, 2, 3};
+
+ if (rte_event_dev_info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+ rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t profile_id_to_switch;
+
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ profile_id_to_switch = app_findprofile_id_to_switch();
+ rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+ continue;
+ }
+
+ // Process the event received.
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 333e1d95a2..e19a0ed3c3 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -78,6 +78,23 @@ New Features
* build: Optional libraries can now be selected with the new ``enable_libs``
build option similarly to the existing ``enable_drivers`` build option.
+* **Added eventdev support to link queues to port with profile.**
+
+ Introduced event link profiles that can be used to associated links between
+ event queues and an event port with a unique identifier termed as profile.
+ The profile can be used to switch between the associated links in fast-path
+ without the additional overhead of linking/unlinking and waiting for unlinking.
+
+ * Added ``rte_event_port_profile_links_set`` to link event queues to an event
+ port with a unique profile identifier.
+
+ * Added ``rte_event_port_profile_unlink`` to unlink event queues from an event
+ port associated with a profile.
+
+ * Added ``rte_event_port_profile_links_get`` to retrieve links associated to a
+ profile.
+
+ * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
Removed Items
-------------
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 27883a3619..529622cac6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -31,6 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ dev_info->max_profiles_per_port = 1;
}
int
@@ -133,7 +134,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
for (i = 0; i < dev->nb_event_ports; i++) {
uint16_t nb_hwgrp = 0;
- links_map = event_dev->data->links_map;
+ links_map = event_dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 60c5cd4804..580057870f 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
struct process_local_port_data
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 4b3d16735b..f615da3813 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index fa1a1ade80..ffc5550f85 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 6c5cde2468..785c12f61f 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
.max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
.max_num_events = DSW_MAX_EVENTS,
+ .max_profiles_per_port = 1,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 650266b996..0eb9358981 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9ce8b39b60..dd25749654 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
+ .max_profiles_per_port = 1,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index 8513b9a013..dc9b131641 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_EVENT_QOS |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index cfd659d774..6d1816b76d 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
*info = evdev_sw_info;
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f62f42e140..66fdad71f3 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
@@ -178,6 +178,9 @@ struct rte_eventdev {
event_tx_adapter_enqueue_t txa_enqueue;
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a profile to a destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a profile from destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1348,8 +1399,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..a5e0bd3de0 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
return 0;
}
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
.txa_enqueue_same_dest =
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .profile_switch = dummy_event_port_profile_switch,
.data = dummy_data,
};
@@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue = dev->txa_enqueue;
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->profile_switch = dev->profile_switch;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..7b89b8d53f 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_link_with_profile,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_unlink_with_profile,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..cfde20b2a8 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link_with_profile,
+ lib.eventdev.port.link_with_profile)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink_with_profile,
+ lib.eventdev.port.unlink_with_profile)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..2171d131ad 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -270,7 +270,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -281,7 +281,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -297,9 +296,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -953,21 +954,44 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -995,18 +1019,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_link_with_profile(dev_id, port_id, nb_links, profile, diag);
return diag;
}
@@ -1014,27 +1042,50 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1063,16 +1114,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_unlink_with_profile(dev_id, port_id, nb_unlinks, profile, diag);
return diag;
}
@@ -1116,7 +1170,50 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile. */
+ links_map = dev->data->links_map[0];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_links_get(dev_id, port_id, count);
+
+ return count;
+}
+
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1440,7 +1537,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1460,11 +1557,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2ba8a7b090..7a169de067 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
* rte_event_queue_setup().
*/
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/** Event device is capable of supporting multiple link profiles per event port
+ * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than one, this function links the event queues to the default profile,
+ * i.e., profile 0, of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than one, this function unlinks the event queues from the default profile,
+ * i.e., profile 0, of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1626,6 +1644,136 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile* with service priorities supplied in *priorities* on
+ * the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, the links created by the call
+ * ``rte_event_port_link`` will be overwritten.
+ *
+ * Event ports by default use profile 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* from
+ * receiving events from the specified event queue(s) supplied in *queues*
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues* which were previously supplied to
+ * rte_event_dev_configure() to the event port *port_id*
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * The priority defines the event port's servicing priority for the
+ * event queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to an
+ * implementation-supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded (application tried to link a queue configured with
+ * RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, i.e., the default profile, this function behaves
+ * the same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue(s) supplied in *queues*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1680,6 +1828,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a profile and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile* is invalid.
+ */
+__rte_experimental
+static inline int
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_change_profile(dev_id, port_id, profile);
+
+ return fp_ops->profile_switch(port, profile);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..dfde8500fc 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -65,6 +67,8 @@ struct rte_event_fp_ops {
/**< PMD Tx adapter enqueue same destination function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< PMD Crypto adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uintptr_t reserved[6];
} __rte_cache_aligned;
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..141a45ed58 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_change_profile,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b03c10d99f..67efee5489 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,12 @@ EXPERIMENTAL {
rte_event_eth_tx_adapter_runtime_params_init;
rte_event_eth_tx_adapter_runtime_params_set;
rte_event_timer_remaining_ticks_get;
+
+ # added in 23.11
+ rte_event_port_profile_links_set;
+ rte_event_port_profile_unlink;
+ rte_event_port_profile_links_get;
+ rte_event_port_profile_switch;
};
INTERNAL {
--
2.25.1
* [PATCH 2/3] event/cnxk: implement event link profiles
2023-08-25 18:44 ` [PATCH " pbhagavatula
2023-08-25 18:44 ` [PATCH 1/3] eventdev: introduce " pbhagavatula
@ 2023-08-25 18:44 ` pbhagavatula
2023-08-25 18:44 ` [PATCH 3/3] test/event: add event link profile test pbhagavatula
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-25 18:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profiles support on CN10K and CN9K.
Both the platforms support up to 2 link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/rel_notes/release_23_11.rst | 5 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 38 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
15 files changed, 164 insertions(+), 82 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index bee69bf8f4..5d353e3670 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,7 +12,8 @@ runtime_port_link = Y
multiple_queue_port = Y
carry_flow_id = Y
maintenance_free = Y
-runtime_queue_attr = y
+runtime_queue_attr = Y
+profile_links = Y
[Eth Rx adapter Features]
internal_port = Y
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index e19a0ed3c3..a6362ed7f8 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -96,6 +96,11 @@ New Features
* Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
+* **Added support for link profiles for Marvell CNXK event device driver.**
+
+ The Marvell CNXK event device driver supports up to two link profiles per
+ event port. Added support to advertise the link profile capability and the
+ associated APIs.
+
Removed Items
-------------
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index a5f48d5bbc..f063184565 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -185,8 +185,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -203,7 +203,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
@@ -289,8 +289,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -298,14 +298,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -313,7 +313,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index a2bb6fcb22..55a8894050 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 499a3aace7..69d970ac30 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -475,6 +476,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->profile_switch = cn10k_sso_hws_profile_switch;
#else
RTE_SET_USED(event_dev);
#endif
@@ -618,9 +620,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -629,14 +630,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -644,11 +645,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -993,6 +1008,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b4ee023723..7aa49d7b3b 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -316,6 +316,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6cce5477f0..10a8c4dfbc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->profile_switch = cn9k_sso_hws_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+ event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -695,9 +696,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -706,14 +706,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -721,11 +721,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1006,6 +1020,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9ddab095ac..bb062a2eaf 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -375,6 +375,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -391,6 +392,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 529622cac6..f48d6d91b6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,8 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
- RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
- dev_info->max_profiles_per_port = 1;
+ RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+ RTE_EVENT_DEV_CAP_PROFILE_LINK;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -129,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map[0];
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -436,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -447,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 962e630256..d351314200 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
* [PATCH 3/3] test/event: add event link profile test
2023-08-25 18:44 ` [PATCH " pbhagavatula
2023-08-25 18:44 ` [PATCH 1/3] eventdev: introduce " pbhagavatula
2023-08-25 18:44 ` [PATCH 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-08-25 18:44 ` pbhagavatula
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-25 18:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 29354a24c9..b333fec634 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_change_profile(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+ TEST_ASSERT(rc == 1, "Failed to get links");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+ TEST_ASSERT(rc == 1, "Failed to get links");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ printf("rc %d\n", rc);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ q = 0;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 1 to port 0 with profile 1");
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_change_profile),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
* [PATCH v2 0/3] Introduce event link profiles
2023-08-25 18:44 ` [PATCH " pbhagavatula
` (2 preceding siblings ...)
2023-08-25 18:44 ` [PATCH 3/3] test/event: add event link profile test pbhagavatula
@ 2023-08-31 20:44 ` pbhagavatula
2023-08-31 20:44 ` [PATCH v2 1/3] eventdev: introduce " pbhagavatula
` (3 more replies)
3 siblings, 4 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-31 20:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a profile; multiple such profiles can
be configured, based on the event device capability, using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This affects the next
`rte_event_dequeue_burst` call, where the event queues associated with the
newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:
Config path:
	uint8_t lowQ[4] = {4, 5, 6, 7};
	uint8_t highQ[4] = {0, 1, 2, 3};

	if (rte_event_dev_info.max_profiles_per_port < 2)
		return -ENOTSUP;

	rte_event_port_profile_links_set(0, 0, highQ, NULL, 4, 0);
	rte_event_port_profile_links_set(0, 0, lowQ, NULL, 4, 1);
Worker path:
	empty_high_deq = 0;
	empty_low_deq = 0;
	is_low_deq = 0;
	while (1) {
		deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
		if (deq == 0) {
			/* Change link profile based on work activity on the
			 * currently active profile.
			 */
			if (is_low_deq) {
				empty_low_deq++;
				if (empty_low_deq == MAX_LOW_RETRY) {
					rte_event_port_profile_switch(0, 0, 0);
					is_low_deq = 0;
					empty_low_deq = 0;
				}
				continue;
			}

			empty_high_deq++;
			if (empty_high_deq == MAX_HIGH_RETRY) {
				rte_event_port_profile_switch(0, 0, 1);
				is_low_deq = 1;
				empty_high_deq = 0;
			}
			continue;
		}

		/* Process the event received. */

		if (is_low_deq++ == MAX_LOW_EVENTS) {
			rte_event_port_profile_switch(0, 0, 0);
			is_low_deq = 0;
		}
	}
An application could use heuristic data on the load/activity of a given event
port and change its active profile to adapt to the traffic pattern.
An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
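Independent of DPDK, the per-port bookkeeping these APIs imply can be sketched in plain C: each port keeps one queue map per profile, with entries initialized to a sentinel meaning "unlinked" (mirroring the `links_map[profile][...]` layout and `0xdead` sentinel used in `eventdev_pmd.h`), and switching a profile just records which map the next dequeue should consult. All names below are illustrative, not the DPDK implementation:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PROFILES 2
#define MAX_QUEUES   8
#define UNLINKED     0xdead /* sentinel for "no link", as in eventdev's links_map */

/* Hypothetical per-port state: one link map per profile plus the active one. */
struct port_links {
	uint16_t map[MAX_PROFILES][MAX_QUEUES];
	uint8_t active_profile;
};

static void port_init(struct port_links *p)
{
	for (int k = 0; k < MAX_PROFILES; k++)
		for (int q = 0; q < MAX_QUEUES; q++)
			p->map[k][q] = UNLINKED;
	p->active_profile = 0; /* profile 0 is active by default */
}

/* Record links for one profile; returns the number of links made. */
static int links_set(struct port_links *p, const uint8_t *queues, int n,
		     uint8_t profile)
{
	for (int i = 0; i < n; i++)
		p->map[profile][queues[i]] = queues[i];
	return n;
}

/* Switching is only a bookkeeping update; no unlink/relink is needed. */
static int profile_switch(struct port_links *p, uint8_t profile)
{
	if (profile >= MAX_PROFILES)
		return -1;
	p->active_profile = profile; /* takes effect for the next dequeue */
	return 0;
}

/* Number of queues the scheduler would consider for a given profile. */
static int links_get(const struct port_links *p, uint8_t profile)
{
	int n = 0;
	for (int q = 0; q < MAX_QUEUES; q++)
		if (p->map[profile][q] != UNLINKED)
			n++;
	return n;
}
```

This is only a model of the data layout; the real series delegates the link programming to the PMD and keeps `links_map` per profile so links can be restored after reconfiguration.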
v2 Changes:
----------
- Fix compilation.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 22 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
drivers/event/cnxk/cn10k_worker.c | 11 +
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 ++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 4 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 6 +
36 files changed, 828 insertions(+), 110 deletions(-)
--
2.25.1
* [PATCH v2 1/3] eventdev: introduce link profiles
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
@ 2023-08-31 20:44 ` pbhagavatula
2023-09-20 4:22 ` Jerin Jacob
2023-08-31 20:44 ` [PATCH v2 2/3] event/cnxk: implement event " pbhagavatula
` (2 subsequent siblings)
3 siblings, 1 reply; 44+ messages in thread
From: pbhagavatula @ 2023-08-31 20:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a profile; multiple
such profiles can be created, based on the event device capability,
using the function `rte_event_port_profile_links_set`, which takes
arguments similar to `rte_event_port_link` in addition to the profile
identifier.
The maximum number of link profiles supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This affects
the next `rte_event_dequeue_burst` call, where the event queues
associated with the newly active link profile will participate in
scheduling.
An unlink function `rte_event_port_profile_unlink` is provided
to modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the
links associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
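For a sense of why the switch is cheap in the fast path: in a hardware PMD it can reduce to a couple of bit operations on the port's gateway word, which the next dequeue then reads, as in the cnxk implementation later in this series. A self-contained sketch of that pattern follows; the struct and field names are illustrative, and the range check is added here for safety rather than taken from the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative port state: the low byte of gw_wdata selects the link
 * profile (encoded as profile + 1, with 0 meaning "none"). */
struct hws {
	uint64_t gw_wdata;
};

static int profile_switch(struct hws *ws, uint8_t profile, uint8_t max_profiles)
{
	if (profile >= max_profiles)
		return -1;
	ws->gw_wdata &= ~0xFFUL;                 /* clear the profile selector */
	ws->gw_wdata |= (uint64_t)(profile + 1); /* next dequeue uses this profile */
	return 0;
}
```

No queues are unlinked or relinked at switch time; the pre-programmed per-profile links simply become visible to the scheduler on the next dequeue.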
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 17 ++
drivers/event/cnxk/cnxk_eventdev.c | 3 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 4 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 6 +
22 files changed, 549 insertions(+), 30 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..1c0082352b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
carry_flow_id =
maintenance_free =
runtime_queue_attr =
+profile_links =
;
; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..9c07870a79 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+Linking Queues to Ports with profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at run time depending upon heuristic data.
+Using link profiles can reduce the overhead of linking/unlinking and waiting for unlinks in
+progress in the fast path, and gives applications the ability to switch between preset profiles on the fly.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lq[4] = {4, 5, 6, 7};
+ uint8_t hq[4] = {0, 1, 2, 3};
+
+ if (rte_event_dev_info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+ rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t profile_id_to_switch;
+
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ profile_id_to_switch = app_find_profile_id_to_switch();
+ rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+ continue;
+ }
+
+ // Process the event received.
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 333e1d95a2..e19a0ed3c3 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -78,6 +78,23 @@ New Features
* build: Optional libraries can now be selected with the new ``enable_libs``
build option similarly to the existing ``enable_drivers`` build option.
+* **Added eventdev support to link queues to ports with profiles.**
+
+ Introduced event link profiles that can be used to associate links between
+ event queues and an event port with a unique identifier termed a profile.
+ The profile can be used to switch between the associated links in fast-path
+ without the additional overhead of linking/unlinking and waiting for unlinking.
+
+ * Added ``rte_event_port_profile_links_set`` to link event queues to an event
+ port with a unique profile identifier.
+
+ * Added ``rte_event_port_profile_unlink`` to unlink event queues from an event
+ port associated with a profile.
+
+ * Added ``rte_event_port_profile_links_get`` to retrieve links associated to a
+ profile.
+
+ * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
Removed Items
-------------
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 27883a3619..529622cac6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -31,6 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ dev_info->max_profiles_per_port = 1;
}
int
@@ -133,7 +134,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
for (i = 0; i < dev->nb_event_ports; i++) {
uint16_t nb_hwgrp = 0;
- links_map = event_dev->data->links_map;
+ links_map = event_dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 60c5cd4804..580057870f 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
struct process_local_port_data
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 4b3d16735b..f615da3813 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index fa1a1ade80..ffc5550f85 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 6c5cde2468..785c12f61f 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
.max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
.max_num_events = DSW_MAX_EVENTS,
+ .max_profiles_per_port = 1,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 650266b996..0eb9358981 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9ce8b39b60..dd25749654 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
+ .max_profiles_per_port = 1,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index 8513b9a013..dc9b131641 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_EVENT_QOS |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index cfd659d774..6d1816b76d 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
*info = evdev_sw_info;
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f62f42e140..66fdad71f3 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
@@ -178,6 +178,9 @@ struct rte_eventdev {
event_tx_adapter_enqueue_t txa_enqueue;
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a profile to a destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a profile from a destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1348,8 +1399,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..a5e0bd3de0 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
return 0;
}
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
.txa_enqueue_same_dest =
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .profile_switch = dummy_event_port_profile_switch,
.data = dummy_data,
};
@@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue = dev->txa_enqueue;
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->profile_switch = dev->profile_switch;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..5fc9bebd13 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_set,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_unlink,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
@@ -487,6 +509,16 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(count);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_get,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile,
+ int count),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(count);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlinks_in_progress,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..8024e07531 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
+ lib.eventdev.port.profile.links.set)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
+ lib.eventdev.port.profile.unlink)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
@@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
lib.eventdev.maintain)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
+ lib.eventdev.port.profile.switch)
+
/* Eventdev Rx adapter trace points */
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create,
lib.eventdev.rx.adapter.create)
@@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get,
lib.eventdev.port.links.get)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get,
+ lib.eventdev.port.profile.links.get)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress,
lib.eventdev.port.unlinks.in.progress)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..30df0572d2 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -270,7 +270,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -281,7 +281,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -297,9 +296,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -953,21 +954,44 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -995,18 +1019,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile, diag);
return diag;
}
@@ -1014,27 +1042,50 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1063,16 +1114,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile, diag);
return diag;
}
@@ -1116,7 +1170,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile. */
+ links_map = dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1132,6 +1187,48 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return count;
}
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile, count);
+
+ return count;
+}
+
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
uint64_t *timeout_ticks)
@@ -1440,7 +1537,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1460,11 +1557,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2ba8a7b090..f6ce45d160 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
* rte_event_queue_setup().
*/
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/** Event device is capable of supporting multiple link profiles per event port
+ * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile i.e. profile 0 of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile i.e. profile 0 of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1626,6 +1644,136 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile* with service priorities supplied in *priorities* on
+ * the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, the links created by the call `rte_event_port_link`
+ * will be overwritten.
+ *
+ * Event ports by default use profile 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* from
+ * receiving events from the specified event queue(s) supplied in *queues*
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an
+ * event port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues* which were previously supplied to
+ * rte_event_dev_configure() to the event port *port_id*
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * The priority defines the event port's servicing priority for
+ * event queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to an
+ * implementation supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has the
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded (the application tried to link a queue configured
+ * with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, i.e., the default profile, this function behaves
+ * the same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue *queue_id*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has the
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1680,6 +1828,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a profile and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile* is invalid.
+ */
+__rte_experimental
+static inline uint8_t
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile);
+
+ return fp_ops->profile_switch(port, profile);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..dfde8500fc 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -65,6 +67,8 @@ struct rte_event_fp_ops {
/**< PMD Tx adapter enqueue same destination function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< PMD Crypto adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uintptr_t reserved[6];
} __rte_cache_aligned;
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..04d510ad00 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_port_profile_switch,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b03c10d99f..22e88185b7 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,12 @@ EXPERIMENTAL {
rte_event_eth_tx_adapter_runtime_params_init;
rte_event_eth_tx_adapter_runtime_params_set;
rte_event_timer_remaining_ticks_get;
+
+ # added in 23.11
+ rte_event_port_profile_links_set;
+ rte_event_port_profile_unlink;
+ rte_event_port_profile_links_get;
+ __rte_eventdev_trace_port_profile_switch;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v2 2/3] event/cnxk: implement event link profiles
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
2023-08-31 20:44 ` [PATCH v2 1/3] eventdev: introduce " pbhagavatula
@ 2023-08-31 20:44 ` pbhagavatula
2023-08-31 20:44 ` [PATCH v2 3/3] test/event: add event link profile test pbhagavatula
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-31 20:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profiles support on CN10K and CN9K.
Both the platforms support up to 2 link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/rel_notes/release_23_11.rst | 5 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 38 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
15 files changed, 164 insertions(+), 82 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index bee69bf8f4..5d353e3670 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,7 +12,8 @@ runtime_port_link = Y
multiple_queue_port = Y
carry_flow_id = Y
maintenance_free = Y
-runtime_queue_attr = y
+runtime_queue_attr = Y
+profile_links = Y
[Eth Rx adapter Features]
internal_port = Y
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index e19a0ed3c3..a6362ed7f8 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -96,6 +96,11 @@ New Features
* Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
+* **Added support for link profiles for Marvell CNXK event device driver.**
+
+ The Marvell CNXK event device driver supports up to two link profiles per event
+ port. Added support to advertise link profile capabilities and supporting APIs.
+
Removed Items
-------------
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index a5f48d5bbc..f063184565 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -185,8 +185,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -203,7 +203,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
@@ -289,8 +289,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -298,14 +298,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -313,7 +313,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index a2bb6fcb22..55a8894050 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 499a3aace7..69d970ac30 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -475,6 +476,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->profile_switch = cn10k_sso_hws_profile_switch;
#else
RTE_SET_USED(event_dev);
#endif
@@ -618,9 +620,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -629,14 +630,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -644,11 +645,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -993,6 +1008,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b4ee023723..7aa49d7b3b 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -316,6 +316,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6cce5477f0..10a8c4dfbc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->profile_switch = cn9k_sso_hws_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+ event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -695,9 +696,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -706,14 +706,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -721,11 +721,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1006,6 +1020,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9ddab095ac..bb062a2eaf 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -375,6 +375,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -391,6 +392,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 529622cac6..f48d6d91b6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,8 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
- RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
- dev_info->max_profiles_per_port = 1;
+ RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+ RTE_EVENT_DEV_CAP_PROFILE_LINK;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -129,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map[0];
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -436,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -447,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 962e630256..d351314200 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v2 3/3] test/event: add event link profile test
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
2023-08-31 20:44 ` [PATCH v2 1/3] eventdev: introduce " pbhagavatula
2023-08-31 20:44 ` [PATCH v2 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-08-31 20:44 ` pbhagavatula
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-08-31 20:44 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 29354a24c9..b333fec634 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_change_profile(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+ TEST_ASSERT(rc == 1, "Failed to get links");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+ TEST_ASSERT(rc == 1, "Failed to get links");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ printf("rc %d\n", rc);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ q = 0;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 0 from port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 1 from port 0 with profile 1");
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_change_profile),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
* Re: [PATCH v2 1/3] eventdev: introduce link profiles
2023-08-31 20:44 ` [PATCH v2 1/3] eventdev: introduce " pbhagavatula
@ 2023-09-20 4:22 ` Jerin Jacob
0 siblings, 0 replies; 44+ messages in thread
From: Jerin Jacob @ 2023-09-20 4:22 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Fri, Sep 1, 2023 at 3:10 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be
> associated with a unique identifier called as a profile, multiple
> such profiles can be created based on the event device capability
> using the function `rte_event_port_profile_links_set` which takes
> arguments similar to `rte_event_port_link` in addition to the profile
> identifier.
>
> The maximum link profiles that are supported by an event device
> is advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
> By default, event ports are configured to use the link profile 0
> on initialization.
>
> Once multiple link profiles are set up and the event device is started,
> the application can use the function `rte_event_port_profile_switch`
> to change the currently active profile on an event port. This effects
> the next `rte_event_dequeue_burst` call, where the event queues
> associated with the newly active link profile will participate in
> scheduling.
>
> An unlink function `rte_event_port_profile_unlink` is provided
> to modify the links associated to a profile, and
> `rte_event_port_profile_links_get` can be used to retrieve the
> links associated with a profile.
>
> Using Link profiles can reduce the overhead of linking/unlinking and
> waiting for unlinks in progress in fast-path and gives applications
> the ability to switch between preset profiles on the fly.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Could you rebase with next-eventdev tree.
[for-main]dell[dpdk-next-eventdev] $ git pw series apply 29396
Failed to apply patch:
Applying: eventdev: introduce link profiles
Using index info to reconstruct a base tree...
M doc/guides/rel_notes/release_23_11.rst
M drivers/event/cnxk/cnxk_eventdev.c
M drivers/event/dlb2/dlb2.c
M lib/eventdev/rte_eventdev_core.h
M lib/eventdev/version.map
Falling back to patching base and 3-way merge...
Auto-merging lib/eventdev/version.map
CONFLICT (content): Merge conflict in lib/eventdev/version.map
Auto-merging lib/eventdev/rte_eventdev_core.h
CONFLICT (content): Merge conflict in lib/eventdev/rte_eventdev_core.h
Auto-merging drivers/event/dlb2/dlb2.c
Auto-merging drivers/event/cnxk/cnxk_eventdev.c
Auto-merging doc/guides/rel_notes/release_23_11.rst
CONFLICT (content): Merge conflict in doc/guides/rel_notes/release_23_11.rst
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 eventdev: introduce link profiles
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
[for-main]dell[dpdk-next-eventdev] $
* [PATCH v3 0/3] Introduce event link profiles
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
` (2 preceding siblings ...)
2023-08-31 20:44 ` [PATCH v2 3/3] test/event: add event link profile test pbhagavatula
@ 2023-09-21 10:28 ` pbhagavatula
2023-09-21 10:28 ` [PATCH v3 1/3] eventdev: introduce " pbhagavatula
` (4 more replies)
3 siblings, 5 replies; 44+ messages in thread
From: pbhagavatula @ 2023-09-21 10:28 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a profile; multiple such profiles can
be configured based on the event device capability using the function
`rte_event_port_profile_links_set` which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This affects the next
`rte_event_dequeue_burst` call, where the event queues associated with the
newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:
Config path:
uint8_t lowQ[4] = {4, 5, 6, 7};
uint8_t highQ[4] = {0, 1, 2, 3};
if (rte_event_dev_info.max_profiles_per_port < 2)
return -ENOTSUP;
rte_event_port_profile_links_set(0, 0, highQ, NULL, 4, 0);
rte_event_port_profile_links_set(0, 0, lowQ, NULL, 4, 1);
Worker path:
empty_high_deq = 0;
empty_low_deq = 0;
is_low_deq = 0;
while (1) {
deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
if (deq == 0) {
/**
* Change link profile based on work activity on current
* active profile
*/
if (is_low_deq) {
empty_low_deq++;
if (empty_low_deq == MAX_LOW_RETRY) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
empty_low_deq = 0;
}
continue;
}
if (empty_high_deq == MAX_HIGH_RETRY) {
rte_event_port_profile_switch(0, 0, 1);
is_low_deq = 1;
empty_high_deq = 0;
}
continue;
}
// Process the event received.

/* is_low_deq doubles as an event counter while the low-priority
 * profile is active; cap the time spent away from the
 * high-priority queues.
 */
if (is_low_deq && is_low_deq++ == MAX_LOW_EVENTS) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
}
}
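The switching heuristic above can be sketched as a standalone, compilable model. This is plain C with no DPDK dependencies: the state struct, the counter names, and the retry/budget constants are illustrative, and the profile toggle stands in for a real `rte_event_port_profile_switch()` call.

```c
#include <assert.h>
#include <stdint.h>

/* Counters driving the heuristic; profile 0 = high priority, 1 = low. */
struct worker_state {
	uint8_t profile;
	int empty_deqs;  /* consecutive empty dequeues on the current profile */
	int low_events;  /* events processed since switching to profile 1 */
};

#define MAX_EMPTY_RETRY 2 /* empty dequeues tolerated before switching */
#define MAX_LOW_EVENTS  4 /* budget of low-priority events per excursion */

/* Decide the next active profile after one dequeue returning `deq` events.
 * In a real worker the profile change would be a call such as
 * rte_event_port_profile_switch(dev_id, port_id, s->profile). */
static void update_profile(struct worker_state *s, uint16_t deq)
{
	if (deq == 0) {
		if (++s->empty_deqs == MAX_EMPTY_RETRY) {
			s->profile = !s->profile; /* starved: try the other queue set */
			s->empty_deqs = 0;
			s->low_events = 0;
		}
		return;
	}
	s->empty_deqs = 0;
	/* Cap time spent away from the high-priority queues. */
	if (s->profile == 1 && ++s->low_events == MAX_LOW_EVENTS) {
		s->profile = 0;
		s->low_events = 0;
	}
}
```

Keeping the decision in a small pure function like this makes the hysteresis easy to tune and test independently of the event device.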
An application could use heuristics on the load/activity of a given event
port to change its active profile and adapt to the traffic pattern.
An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
v3 Changes:
----------
- Rebase to next-eventdev
- Rename testcase name to match API.
v2 Changes:
----------
- Fix compilation.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 22 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
drivers/event/cnxk/cn10k_worker.c | 11 +
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 ++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 6 +-
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
36 files changed, 827 insertions(+), 111 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v3 1/3] eventdev: introduce link profiles
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
@ 2023-09-21 10:28 ` pbhagavatula
2023-09-27 15:23 ` Jerin Jacob
2023-09-21 10:28 ` [PATCH v3 2/3] event/cnxk: implement event " pbhagavatula
` (3 subsequent siblings)
4 siblings, 1 reply; 44+ messages in thread
From: pbhagavatula @ 2023-09-21 10:28 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a profile. Multiple
such profiles can be created, based on the event device capability,
using the function `rte_event_port_profile_links_set`, which takes
arguments similar to `rte_event_port_link` in addition to the profile
identifier.
The maximum number of link profiles supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This affects
the next `rte_event_dequeue_burst` call, where the event queues
associated with the newly active link profile will participate in
scheduling.
An unlink function `rte_event_port_profile_unlink` is provided
to modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the
links associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 17 ++
drivers/event/cnxk/cnxk_eventdev.c | 3 +-
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 146 ++++++++++---
lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 6 +-
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
22 files changed, 548 insertions(+), 31 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..1c0082352b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
carry_flow_id =
maintenance_free =
runtime_queue_attr =
+profile_links =
;
; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..9c07870a79 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+Linking Queues to Ports with profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at run time depending upon heuristic data.
+Using link profiles can reduce the overhead of linking/unlinking and waiting for unlinks in
+progress in the fast path, and gives applications the ability to switch between preset profiles on the fly.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lq[4] = {4, 5, 6, 7};
+ uint8_t hq[4] = {0, 1, 2, 3};
+
+ if (rte_event_dev_info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+ rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t profile_id_to_switch;
+
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ profile_id_to_switch = app_findprofile_id_to_switch();
+ rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+ continue;
+ }
+
+ // Process the event received.
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index b34ddc0860..e714fc2be5 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -89,6 +89,23 @@ New Features
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
to get the remaining ticks to expire for a given event timer.
+* **Added eventdev support to link queues to port with profile.**
+
+ Introduced event link profiles that can be used to associate links between
+ event queues and an event port with a unique identifier termed a profile.
+ The profile can be used to switch between the associated links in fast-path
+ without the additional overhead of linking/unlinking and waiting for unlinking.
+
+ * Added ``rte_event_port_profile_links_set`` to link event queues to an event
+ port with a unique profile identifier.
+
+ * Added ``rte_event_port_profile_unlink`` to unlink event queues from an event
+ port associated with a profile.
+
+ * Added ``rte_event_port_profile_links_get`` to retrieve links associated to a
+ profile.
+
+ * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
Removed Items
-------------
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 9c9192bd40..f3394a20b1 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -31,6 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ dev_info->max_profiles_per_port = 1;
}
int
@@ -133,7 +134,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
for (i = 0; i < dev->nb_event_ports; i++) {
uint16_t nb_hwgrp = 0;
- links_map = event_dev->data->links_map;
+ links_map = event_dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index cf2764364f..e645f7595a 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
struct process_local_port_data
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 4b3d16735b..f615da3813 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index fa1a1ade80..ffc5550f85 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 6c5cde2468..785c12f61f 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
.max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
.max_num_events = DSW_MAX_EVENTS,
+ .max_profiles_per_port = 1,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 650266b996..0eb9358981 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9ce8b39b60..dd25749654 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
+ .max_profiles_per_port = 1,
};
*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index 8513b9a013..dc9b131641 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_EVENT_QOS |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index cfd659d774..6d1816b76d 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
*info = evdev_sw_info;
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f62f42e140..66fdad71f3 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
@@ -178,6 +178,9 @@ struct rte_eventdev {
event_tx_adapter_enqueue_t txa_enqueue;
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a profile to a destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a profile from destination
+ * event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1348,8 +1399,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..a5e0bd3de0 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
return 0;
}
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
.txa_enqueue_same_dest =
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .profile_switch = dummy_event_port_profile_switch,
.data = dummy_data,
};
@@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue = dev->txa_enqueue;
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->profile_switch = dev->profile_switch;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..5fc9bebd13 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_set,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_unlink,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
@@ -487,6 +509,16 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(count);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_get,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile,
+ int count),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+ rte_trace_point_emit_int(count);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlinks_in_progress,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..8024e07531 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
+ lib.eventdev.port.profile.links.set)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
+ lib.eventdev.port.profile.unlink)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
@@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
lib.eventdev.maintain)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
+ lib.eventdev.port.profile.switch)
+
/* Eventdev Rx adapter trace points */
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create,
lib.eventdev.rx.adapter.create)
@@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get,
lib.eventdev.port.links.get)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get,
+ lib.eventdev.port.profile.links.get)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress,
lib.eventdev.port.unlinks.in.progress)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..30df0572d2 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -270,7 +270,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -281,7 +281,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -297,9 +296,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -953,21 +954,44 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -995,18 +1019,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile, diag);
return diag;
}
@@ -1014,27 +1042,50 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1063,16 +1114,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile, diag);
return diag;
}
@@ -1116,7 +1170,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile. */
+ links_map = dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1132,6 +1187,48 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return count;
}
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile, count);
+
+ return count;
+}
+
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
uint64_t *timeout_ticks)
@@ -1440,7 +1537,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1460,11 +1557,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2ba8a7b090..f6ce45d160 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
* rte_event_queue_setup().
*/
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/** Event device is capable of supporting multiple link profiles per event port
+ * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile, i.e., profile 0, of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile, i.e., profile 0, of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1626,6 +1644,136 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile* with service priorities supplied in *priorities* on
+ * the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, the links created by the call ``rte_event_port_link``
+ * will be overwritten.
+ *
+ * Event ports by default use profile 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s) supplied in *queues*.
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues*, which were previously supplied to
+ * rte_event_dev_configure(), to the event port *port_id*.
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * The priority defines the event port's servicing priority for the
+ * event queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to an
+ * implementation supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded (the application tried to link a queue configured
+ * with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, i.e., the default profile, then this function
+ * behaves like ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue(s).
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on specific queue to port unlink establishment or if invalid
+ * parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1680,6 +1828,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a profile and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked
+ *
+ * @param profile
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile* is invalid.
+ */
+__rte_experimental
+static inline int
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile);
+
+ return fp_ops->profile_switch(port, profile);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c27a52ccc0..5af646ed5c 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -65,7 +67,9 @@ struct rte_event_fp_ops {
/**< PMD Tx adapter enqueue same destination function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< PMD Crypto adapter enqueue function. */
- uintptr_t reserved[5];
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
+ uintptr_t reserved[4];
} __rte_cache_aligned;
extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..04d510ad00 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_port_profile_switch,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 7ce09a87bb..f88decee39 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -134,6 +134,10 @@ EXPERIMENTAL {
# added in 23.11
rte_event_eth_rx_adapter_create_ext_with_params;
+ rte_event_port_profile_links_get;
+ rte_event_port_profile_links_set;
+ rte_event_port_profile_unlink;
+ __rte_eventdev_trace_port_profile_switch;
};
INTERNAL {
--
2.25.1
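Pulling the pieces of the patch together, the intended application flow with the v3 names (`rte_event_port_profile_links_set`, `rte_event_port_profile_switch`) looks roughly like the fragment below. This is an illustrative sketch, not part of the patch: it assumes an event device that is already configured, and dev_id/port_id 0 and the queue ids are arbitrary.

```c
uint8_t hi_q[2] = {0, 1}, lo_q[2] = {2, 3};
struct rte_event_dev_info info;
struct rte_event ev;

rte_event_dev_info_get(0, &info);
if (info.max_profiles_per_port < 2)
	return -ENOTSUP; /* device cannot hold two link profiles */

/* Profile 0 serves the high priority queues, profile 1 the rest. */
rte_event_port_profile_links_set(0, 0, hi_q, NULL, 2, 0);
rte_event_port_profile_links_set(0, 0, lo_q, NULL, 2, 1);
rte_event_dev_start(0);

/* Worker: drain profile 0; when it runs dry, flip to profile 1 so the
 * next dequeue schedules from the low priority queues instead. */
if (rte_event_dequeue_burst(0, 0, &ev, 1, 0) == 0)
	rte_event_port_profile_switch(0, 0, 1);
```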
* [PATCH v3 2/3] event/cnxk: implement event link profiles
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
2023-09-21 10:28 ` [PATCH v3 1/3] eventdev: introduce " pbhagavatula
@ 2023-09-21 10:28 ` pbhagavatula
2023-09-27 15:29 ` Jerin Jacob
2023-09-21 10:28 ` [PATCH v3 3/3] test/event: add event link profile test pbhagavatula
` (2 subsequent siblings)
4 siblings, 1 reply; 44+ messages in thread
From: pbhagavatula @ 2023-09-21 10:28 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profiles support on CN10K and CN9K.
Both platforms support up to two link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/rel_notes/release_23_11.rst | 5 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 38 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
15 files changed, 164 insertions(+), 82 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index bee69bf8f4..5d353e3670 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,7 +12,8 @@ runtime_port_link = Y
multiple_queue_port = Y
carry_flow_id = Y
maintenance_free = Y
-runtime_queue_attr = y
+runtime_queue_attr = Y
+profile_links = Y
[Eth Rx adapter Features]
internal_port = Y
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index e714fc2be5..69b3e4a1d8 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -107,6 +107,11 @@ New Features
* Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
+* **Added support for link profiles for Marvell CNXK event device driver.**
+
+ Marvell CNXK event device driver supports up to two link profiles per event
+ port. Added support to advertise link profile capabilities and the supporting APIs.
+
Removed Items
-------------
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index c37da685da..748d287bad 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -186,8 +186,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -204,7 +204,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
@@ -290,8 +290,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -299,14 +299,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -314,7 +314,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index 8ee62afb9a..64f14b8119 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index cf186b9af4..bb0c910553 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -482,6 +483,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->profile_switch = cn10k_sso_hws_profile_switch;
#else
RTE_SET_USED(event_dev);
#endif
@@ -633,9 +635,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -644,14 +645,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -659,11 +660,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -1020,6 +1035,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e71ab3c523..26fecf21fb 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -329,6 +329,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index fe6f5d9f86..9fb9ca0d63 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->profile_switch = cn9k_sso_hws_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+ event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -708,9 +709,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -719,14 +719,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -734,11 +734,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1019,6 +1033,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ee659e80d6..6936b7ad04 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -366,6 +366,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -382,6 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index f3394a20b1..0c61f4c20e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,8 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
- RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
- dev_info->max_profiles_per_port = 1;
+ RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+ RTE_EVENT_DEV_CAP_PROFILE_LINK;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -129,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map[0];
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -436,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -447,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bd50de87c0..d42d1afa1a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v3 3/3] test/event: add event link profile test
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
2023-09-21 10:28 ` [PATCH v3 1/3] eventdev: introduce " pbhagavatula
2023-09-21 10:28 ` [PATCH v3 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-09-21 10:28 ` pbhagavatula
2023-09-27 14:56 ` [PATCH v3 0/3] Introduce event link profiles Jerin Jacob
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
4 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-09-21 10:28 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index c51c93bdbd..0ecfa7db02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_profile_switch(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+	TEST_ASSERT(rc == 1, "Failed to get links for profile 0");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+	TEST_ASSERT(rc == 1, "Failed to get links for profile 1");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+	TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ q = 0;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+	TEST_ASSERT(rc == 1, "Failed to unlink queue 0 from port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+	TEST_ASSERT(rc == 1, "Failed to unlink queue 1 from port 0 with profile 1");
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_profile_switch),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH v3 0/3] Introduce event link profiles
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
` (2 preceding siblings ...)
2023-09-21 10:28 ` [PATCH v3 3/3] test/event: add event link profile test pbhagavatula
@ 2023-09-27 14:56 ` Jerin Jacob
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
4 siblings, 0 replies; 44+ messages in thread
From: Jerin Jacob @ 2023-09-27 14:56 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Thu, Sep 21, 2023 at 5:16 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be associated
> with unique identifier called as a profile, multiple such profiles can
as a "link profile"
> be configured based on the event device capability using the function
> `rte_event_port_profile_links_set` which takes arguments similar to
> `rte_event_port_link` in addition to the profile identifier.
>
> The maximum link profiles that are supported by an event device is
> advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
>
> By default, event ports are configured to use the link profile 0 on
> initialization.
>
> Once multiple link profiles are set up and the event device is started, the
> application can use the function `rte_event_port_profile_switch` to change
> the currently active profile on an event port. This affects the next
> `rte_event_dequeue_burst` call, where the event queues associated with the
> newly active link profile will participate in scheduling.
>
> Rudimentary workflow would be something like:
>
> Config path:
>
> uint8_t lowQ[4] = {4, 5, 6, 7};
lowq
> uint8_t highQ[4] = {0, 1, 2, 3};
highq
>
> if (rte_event_dev_info.max_profiles_per_port < 2)
> return -ENOTSUP;
>
> rte_event_port_profile_links_set(0, 0, highQ, NULL, 4, 0);
> rte_event_port_profile_links_set(0, 0, lowQ, NULL, 4, 1);
>
> Worker path:
Simplify by:
uint8_t profile_id_to_switch;
while (1) {
deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
if (deq == 0) {
profile_id_to_switch = app_find_profile_id_to_switch();
rte_event_port_profile_switch(0, 0, profile_id_to_switch);
continue;
}
// Process the event received.
}
> An application could use heuristic data of load/activity of a given event
> port and change its active profile to adapt to the traffic pattern.
>
> An unlink function `rte_event_port_profile_unlink` is provided to
> modify the links associated to a profile, and
> `rte_event_port_profile_links_get` can be used to retrieve the links
> associated with a profile.
>
> Using Link profiles can reduce the overhead of linking/unlinking and
> waiting for unlinks in progress in fast-path and gives applications
> the ability to switch between preset profiles on the fly.
>
> v3 Changes:
> ----------
> - Rebase to next-eventdev
> - Rename testcase name to match API.
>
> v2 Changes:
> ----------
> - Fix compilation.
>
> Pavan Nikhilesh (3):
> eventdev: introduce link profiles
> event/cnxk: implement event link profiles
> test/event: add event link profile test
>
> app/test/test_eventdev.c | 117 +++++++++++
> config/rte_config.h | 1 +
> doc/guides/eventdevs/cnxk.rst | 1 +
> doc/guides/eventdevs/features/cnxk.ini | 3 +-
> doc/guides/eventdevs/features/default.ini | 1 +
> doc/guides/prog_guide/eventdev.rst | 40 ++++
> doc/guides/rel_notes/release_23_11.rst | 22 ++
> drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> drivers/common/cnxk/roc_sso.c | 18 +-
> drivers/common/cnxk/roc_sso.h | 8 +-
> drivers/common/cnxk/roc_sso_priv.h | 4 +-
> drivers/event/cnxk/cn10k_eventdev.c | 45 ++--
> drivers/event/cnxk/cn10k_worker.c | 11 +
> drivers/event/cnxk/cn10k_worker.h | 1 +
> drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
> drivers/event/cnxk/cn9k_worker.c | 22 ++
> drivers/event/cnxk/cn9k_worker.h | 2 +
> drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
> drivers/event/cnxk/cnxk_eventdev.h | 10 +-
> drivers/event/dlb2/dlb2.c | 1 +
> drivers/event/dpaa/dpaa_eventdev.c | 1 +
> drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
> drivers/event/dsw/dsw_evdev.c | 1 +
> drivers/event/octeontx/ssovf_evdev.c | 2 +-
> drivers/event/opdl/opdl_evdev.c | 1 +
> drivers/event/skeleton/skeleton_eventdev.c | 1 +
> drivers/event/sw/sw_evdev.c | 1 +
> lib/eventdev/eventdev_pmd.h | 59 +++++-
> lib/eventdev/eventdev_private.c | 9 +
> lib/eventdev/eventdev_trace.h | 32 +++
> lib/eventdev/eventdev_trace_points.c | 12 ++
> lib/eventdev/rte_eventdev.c | 146 ++++++++++---
> lib/eventdev/rte_eventdev.h | 231 +++++++++++++++++++++
> lib/eventdev/rte_eventdev_core.h | 6 +-
> lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> lib/eventdev/version.map | 4 +
> 36 files changed, 827 insertions(+), 111 deletions(-)
>
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH v3 1/3] eventdev: introduce link profiles
2023-09-21 10:28 ` [PATCH v3 1/3] eventdev: introduce " pbhagavatula
@ 2023-09-27 15:23 ` Jerin Jacob
0 siblings, 0 replies; 44+ messages in thread
From: Jerin Jacob @ 2023-09-27 15:23 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Thu, Sep 21, 2023 at 3:58 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be
> associated with a unique identifier called as a profile, multiple
as a "link profile"
> such profiles can be created based on the event device capability
> using the function `rte_event_port_profile_links_set` which takes
> arguments similar to `rte_event_port_link` in addition to the profile
> identifier.
>
> The maximum link profiles that are supported by an event device
> is advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
> By default, event ports are configured to use the link profile 0
> on initialization.
>
> Once multiple link profiles are set up and the event device is started,
> the application can use the function `rte_event_port_profile_switch`
> to change the currently active profile on an event port. This affects
> the next `rte_event_dequeue_burst` call, where the event queues
> associated with the newly active link profile will participate in
> scheduling.
>
> An unlink function `rte_event_port_profile_unlink` is provided
> to modify the links associated to a profile, and
> `rte_event_port_profile_links_get` can be used to retrieve the
> links associated with a profile.
>
> Using Link profiles can reduce the overhead of linking/unlinking and
> waiting for unlinks in progress in fast-path and gives applications
> the ability to switch between preset profiles on the fly.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> +Linking Queues to Ports with profiles
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +An application can use link profiles, if supported by the underlying event device, to set up
use "link profiles"
> +multiple link profiles per port and change them at runtime depending upon heuristic data.
> +Using link profiles can reduce the overhead of linking/unlinking and waiting for unlinks in
> +progress in fast-path and gives applications the ability to switch between preset profiles on the fly.
> +
> +An example use case could be as follows.
> +
> +Config path:
> +
> +.. code-block:: c
> +
> + uint8_t lq[4] = {4, 5, 6, 7};
> + uint8_t hq[4] = {0, 1, 2, 3};
> +
> + if (rte_event_dev_info.max_profiles_per_port < 2)
> + return -ENOTSUP;
> +
> + rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
> + rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
> +
> +Worker path:
> +
> +.. code-block:: c
> +
> + uint8_t profile_id_to_switch;
> +
> + while (1) {
> + deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> + if (deq == 0) {
> + profile_id_to_switch = app_findprofile_id_to_switch();
app_find_profile_id_to_switch()
> + rte_event_port_profile_switch(0, 0, profile_id_to_switch);
> + continue;
> + }
> +
> + // Process the event received.
> + }
> +
> Starting the EventDev
> ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
> index b34ddc0860..e714fc2be5 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -89,6 +89,23 @@ New Features
> * Added support for ``remaining_ticks_get`` timer adapter PMD callback
> to get the remaining ticks to expire for a given event timer.
>
> +* **Added eventdev support to link queues to port with profile.**
> +
> + Introduced event link profiles that can be used to associate links between
``link profiles``
> + event queues and an event port with a unique identifier termed as profile.
> + The profile can be used to switch between the associated links in fast-path
> + without the additional overhead of linking/unlinking and waiting for unlinking.
Added ``rte_event_port_profile_links_set``, ``rte_event_port_profile_unlink``,
``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch`` APIs
to enable this feature.
In order to reduce verbosity in the release notes, the text below can be
replaced with the text above.
> +
> + * Added ``rte_event_port_profile_links_set`` to link event queues to an event
> + port with a unique profile identifier.
> +
> + * Added ``rte_event_port_profile_unlink`` to unlink event queues from an event
> + port associated with a profile.
> +
> + * Added ``rte_event_port_profile_links_get`` to retrieve links associated to a
> + profile.
> +
> + * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
>
> diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
> index cf2764364f..e645f7595a 100644
> --- a/drivers/event/dlb2/dlb2.c
> +++ b/drivers/event/dlb2/dlb2.c
> @@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
> RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
> + .max_profiles_per_port = 1,
> };
>
> struct process_local_port_data
> diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
> index 4b3d16735b..f615da3813 100644
> --- a/drivers/event/dpaa/dpaa_eventdev.c
> +++ b/drivers/event/dpaa/dpaa_eventdev.c
> @@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
> RTE_EVENT_DEV_CAP_NONSEQ_MODE |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
> + dev_info->max_profiles_per_port = 1;
> }
>
> static int
> diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
> index fa1a1ade80..ffc5550f85 100644
> --- a/drivers/event/dpaa2/dpaa2_eventdev.c
> +++ b/drivers/event/dpaa2/dpaa2_eventdev.c
> @@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
> RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
> -
> + dev_info->max_profiles_per_port = 1;
> }
>
> static int
> diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
> index 6c5cde2468..785c12f61f 100644
> --- a/drivers/event/dsw/dsw_evdev.c
> +++ b/drivers/event/dsw/dsw_evdev.c
> @@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
> .max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
> .max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
> .max_num_events = DSW_MAX_EVENTS,
> + .max_profiles_per_port = 1,
> .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
> RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
> RTE_EVENT_DEV_CAP_NONSEQ_MODE|
> diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
> index 650266b996..0eb9358981 100644
> --- a/drivers/event/octeontx/ssovf_evdev.c
> +++ b/drivers/event/octeontx/ssovf_evdev.c
> @@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
> RTE_EVENT_DEV_CAP_NONSEQ_MODE |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
> -
> + dev_info->max_profiles_per_port = 1;
> }
>
> static int
> diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
> index 9ce8b39b60..dd25749654 100644
> --- a/drivers/event/opdl/opdl_evdev.c
> +++ b/drivers/event/opdl/opdl_evdev.c
> @@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
> + .max_profiles_per_port = 1,
> };
>
> *info = evdev_opdl_info;
> diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
> index 8513b9a013..dc9b131641 100644
> --- a/drivers/event/skeleton/skeleton_eventdev.c
> +++ b/drivers/event/skeleton/skeleton_eventdev.c
> @@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
> RTE_EVENT_DEV_CAP_EVENT_QOS |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
> + dev_info->max_profiles_per_port = 1;
> }
>
> static int
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index cfd659d774..6d1816b76d 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> RTE_EVENT_DEV_CAP_NONSEQ_MODE |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
> + .max_profiles_per_port = 1,
> };
>
In order to optimize for the common case, i.e. changes in all PMDs, please
move this logic to common code, i.e.
[for-main]dell[dpdk-next-eventdev] $ git diff
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 30df0572d2..39018f23b6 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -95,6 +95,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct
rte_event_dev_info *dev_info)
return -EINVAL;
memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+ dev_info->max_profiles_per_port = 1;
if (*dev->dev_ops->dev_infos_get == NULL)
return -ENOTSUP;
[for-main]dell[dpdk-next-eventdev] $
> +/**
> + * Link multiple source event queues associated with a profile to a destination
> + * event port.
> + *
> + * @param dev
> + * Event device pointer
> + * @param port
> + * Event port pointer
> + * @param queues
> + * Points to an array of *nb_links* event queues to be linked
> + * to the event port.
> + * @param priorities
> + * Points to an array of *nb_links* service priorities associated with each
> + * event queue link to event port.
> + * @param nb_links
> + * The number of links to establish.
> + * @param profile
profile_id
> + * The profile ID to associate the links.
> + *
> + * @return
> + * Returns 0 on success.
> + */
> +typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
> + const uint8_t queues[], const uint8_t priorities[],
> + uint16_t nb_links, uint8_t profile);
profile_id
> +
> /**
> * Unlink multiple source event queues from destination event port.
> *
> @@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
> typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
> uint8_t queues[], uint16_t nb_unlinks);
>
> +/**
> + * Unlink multiple source event queues associated with a profile from destination
> + * event port.
> + *
> + * @param dev
> + * Event device pointer
> + * @param port
> + * Event port pointer
> + * @param queues
> + * An array of *nb_unlinks* event queues to be unlinked from the event port.
> + * @param nb_unlinks
> + * The number of unlinks to establish
> + * @param profile
profile_id
> + * The profile ID of the associated links.
> + *
> + * @return
> + * Returns 0 on success.
> + */
> +typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
> + uint8_t queues[], uint16_t nb_unlinks,
> + uint8_t profile);
profile_id
> RTE_TRACE_POINT(
> rte_eventdev_trace_port_unlinks_in_progress,
> RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
> diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
> index 76144cfe75..8024e07531 100644
> --- a/lib/eventdev/eventdev_trace_points.c
> +++ b/lib/eventdev/eventdev_trace_points.c
> @@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
> lib.eventdev.port.link)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
> + lib.eventdev.port.profile.links.set)
> +
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
> lib.eventdev.port.unlink)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
> + lib.eventdev.port.profile.unlink)
> +
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
> lib.eventdev.start)
>
> @@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
> lib.eventdev.maintain)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
> + lib.eventdev.port.profile.switch)
> +
> /* Eventdev Rx adapter trace points */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 6ab4524332..30df0572d2 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> +int
> +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint16_t nb_unlinks, uint8_t profile)
profile -> profile_id
> +int
> +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint8_t priorities[], uint8_t profile)
profile_id
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 2ba8a7b090..f6ce45d160 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -320,6 +320,12 @@ struct rte_event;
> * rte_event_queue_setup().
> */
>
> +#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
Need to change the comment to /**< style to show up in doxygen
> +/** Event device is capable of supporting multiple link profiles per event port
> + * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> + * than one.
> + */
> +
> +/**
> + * Link multiple source event queues supplied in *queues* to the destination
> + * event port designated by its *port_id* with associated profile identifier
> + * supplied in *profile* with service priorities supplied in *priorities* on
profile_id
> + * the event device designated by its *dev_id*.
> + *
> + * If *profile* is set to 0, then the links created by the call `rte_event_port_link`
profile_id
> + * will be overwritten.
> + *
> + * Event ports by default use profile 0 unless it is changed using the
> + * call ``rte_event_port_profile_switch()``.
> + *
> + * The link establishment shall enable the event port *port_id* from
> + * receiving events from the specified event queue(s) supplied in *queues*
> + *
> + * An event queue may link to one or more event ports.
> + * The number of links that can be established from an event queue to an event
> + * port is implementation defined.
> + *
> + * Event queue(s) to event port link establishment can be changed at runtime
> + * without re-configuring the device to support scaling and to reduce the
> + * latency of critical work by establishing the link with more event ports
> + * at runtime.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + *
> + * @param port_id
> + * Event port identifier to select the destination port to link.
> + *
> + * @param queues
> + * Points to an array of *nb_links* event queues to be linked
> + * to the event port.
> + * NULL value is allowed, in which case this function links all the configured
> + * event queues *nb_event_queues* which previously supplied to
> + * rte_event_dev_configure() to the event port *port_id*
> + *
> + * @param priorities
> + * Points to an array of *nb_links* service priorities associated with each
> + * event queue link to event port.
> + * The priority defines the event port's servicing priority for
> + * event queue, which may be ignored by an implementation.
> + * The requested priority should in the range of
> + * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
> + * The implementation shall normalize the requested priority to
> + * implementation supported priority value.
> + * NULL value is allowed, in which case this function links the event queues
> + * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
> + *
> + * @param nb_links
> + * The number of links to establish. This parameter is ignored if queues is
> + * NULL.
> + *
> + * @param profile
profile_id
> + * The profile identifier associated with the links between event queues and
> + * event port. Should be less than the max capability reported by
> + * ``rte_event_dev_info::max_profiles_per_port``
> + *
> + * @return
> + * The number of links actually established. The return value can be less than
> + * the value of the *nb_links* parameter when the implementation has a
> + * limitation on specific queue to port link establishment or if invalid
> + * parameters are specified in *queues*.
> + * If the return value is less than *nb_links*, the remaining links at the end
> + * of link[] are not established, and the caller has to take care of them.
> + * If the return value is less than *nb_links*, then the implementation shall
> + * update rte_errno accordingly. Possible rte_errno values are:
> + * (EDQUOT) Quota exceeded(Application tried to link the queue configured with
> + * RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
> + * (EINVAL) Invalid parameter
> + *
> + */
> +__rte_experimental
> +int
> +rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
> + const uint8_t priorities[], uint16_t nb_links, uint8_t profile);
profile_id
> +
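The per-port, per-profile state that a links-set call maintains can be modelled with a small self-contained sketch. This is not the DPDK implementation — `struct port_links`, `PRIO_INVALID`, and the constants are illustrative stand-ins for the structures and macros named in the doxygen above:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_PROFILES 8       /* models RTE_EVENT_MAX_PROFILES_PER_PORT */
#define MAX_QUEUES   64      /* illustrative, smaller than RTE_EVENT_MAX_QUEUES_PER_DEV */
#define PRIO_INVALID 0xFFFF  /* models EVENT_QUEUE_SERVICE_PRIORITY_INVALID */
#define PRIO_NORMAL  128     /* models RTE_EVENT_DEV_PRIORITY_NORMAL */

struct port_links {
	/* Priority per queue per profile; PRIO_INVALID means "not linked". */
	uint16_t map[MAX_PROFILES][MAX_QUEUES];
};

static void port_links_init(struct port_links *pl)
{
	for (int p = 0; p < MAX_PROFILES; p++)
		for (int q = 0; q < MAX_QUEUES; q++)
			pl->map[p][q] = PRIO_INVALID;
}

/* Link nb_links queues under a profile; NULL priorities means NORMAL for
 * all, as documented above. Returns the number of links established, or
 * -1 for a bad profile id (models the -EINVAL path). */
static int links_set(struct port_links *pl, uint8_t profile_id,
		     const uint8_t queues[], const uint8_t priorities[],
		     uint16_t nb_links)
{
	if (profile_id >= MAX_PROFILES)
		return -1;
	for (uint16_t i = 0; i < nb_links; i++)
		pl->map[profile_id][queues[i]] =
			priorities != NULL ? priorities[i] : PRIO_NORMAL;
	return nb_links;
}
```

Note how links made under one profile leave the other profiles' tables untouched — that independence is what makes the later fast-path switch cheap.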
> +/**
> + * Unlink multiple source event queues supplied in *queues* that belong to profile
> + * designated by *profile* from the destination event port designated by its
> + * *port_id* on the event device designated by its *dev_id*.
> + *
> + * If *profile* is set to 0, i.e., the default profile, then this function will
profile_id
> + * act as ``rte_event_port_unlink``.
> + *
> + * The unlink call issues an async request to disable the event port *port_id*
> + * from receiving events from the specified event queue(s) supplied in *queues*.
> + * Event queue(s) to event port unlink establishment can be changed at runtime
> + * without re-configuring the device.
> + *
> + * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + *
> + * @param port_id
> + * Event port identifier to select the destination port to unlink.
> + *
> + * @param queues
> + * Points to an array of *nb_unlinks* event queues to be unlinked
> + * from the event port.
> + * NULL value is allowed, in which case this function unlinks all the
> + * event queue(s) from the event port *port_id*.
> + *
> + * @param nb_unlinks
> + * The number of unlinks to establish. This parameter is ignored if queues is
> + * NULL.
> + *
> + * @param profile
profile_id
> + * The profile identifier associated with the links between event queues and
> + * event port. Should be less than the max capability reported by
> + * ``rte_event_dev_info::max_profiles_per_port``
> + *
> + * @return
> + * The number of unlinks successfully requested. The return value can be less
> + * than the value of the *nb_unlinks* parameter when the implementation has a
> + * limitation on specific queue to port unlink establishment or
> + * if invalid parameters are specified.
> + * If the return value is less than *nb_unlinks*, the remaining queues at the
> + * end of queues[] are not unlinked, and the caller has to take care of them.
> + * If the return value is less than *nb_unlinks*, then the implementation shall
> + * update rte_errno accordingly. Possible rte_errno values are:
> + * (EINVAL) Invalid parameter
> + *
> + */
> +__rte_experimental
> +int
> +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint16_t nb_unlinks, uint8_t profile);
> +
profile_id
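A hedged, self-contained sketch of the unlink semantics described above, including the documented NULL-queues case. The names and the choice to return only the count of queues that were actually linked are illustrative modelling decisions, not the DPDK code:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_QUEUES   8      /* illustrative */
#define PRIO_INVALID 0xFFFF /* models EVENT_QUEUE_SERVICE_PRIORITY_INVALID */

/* One profile's link table for a port: priority per queue,
 * PRIO_INVALID meaning "not linked". Returns the number of queues
 * actually unlinked. */
static int profile_unlink(uint16_t map[MAX_QUEUES], const uint8_t queues[],
			  uint16_t nb_unlinks)
{
	int done = 0;

	if (queues == NULL) { /* NULL unlinks every currently linked queue */
		for (int q = 0; q < MAX_QUEUES; q++) {
			if (map[q] != PRIO_INVALID) {
				map[q] = PRIO_INVALID;
				done++;
			}
		}
		return done;
	}
	for (uint16_t i = 0; i < nb_unlinks; i++) {
		if (map[queues[i]] != PRIO_INVALID)
			done++;
		map[queues[i]] = PRIO_INVALID;
	}
	return done;
}
```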
> +/**
> + * Retrieve the list of source event queues and their service priorities
> + * associated to a profile and linked to the destination event port
> + * designated by its *port_id* on the event device designated by its *dev_id*.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + *
> + * @param port_id
> + * Event port identifier.
> + *
> + * @param[out] queues
> + * Points to an array of *queues* for output.
> + * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
> + * store the event queue(s) linked with event port *port_id*
> + *
> + * @param[out] priorities
> + * Points to an array of *priorities* for output.
> + * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
> + * store the service priority associated with each event queue linked
> + *
> + * @param profile
profile_id
> + * The profile identifier associated with the links between event queues and
> + * event port. Should be less than the max capability reported by
> + * ``rte_event_dev_info::max_profiles_per_port``
> + *
> + * @return
> + * The number of links established on the event port designated by its
> + * *port_id*.
> + * - <0 on failure.
> + */
> +__rte_experimental
> +int
> +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint8_t priorities[], uint8_t profile);
> +
> /**
> * Retrieve the service ID of the event dev. If the adapter doesn't use
> * a rte_service function, this function returns -ESRCH.
> @@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
> return 0;
> }
>
> +/**
> + * Change the active profile on an event port.
> + *
> + * This function is used to change the current active profile on an event port
> + * when multiple link profiles are configured on an event port through the
> + * function call ``rte_event_port_profile_links_set``.
> + *
> + * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
> + * that were associated with the newly active profile will participate in
> + * scheduling.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + * @param port_id
> + * The identifier of the event port.
> + * @param profile
profile_id
> + * The identifier of the profile.
> + * @return
> + * - 0 on success.
> + * - -EINVAL if *dev_id*, *port_id*, or *profile* is invalid.
> + */
> +__rte_experimental
> +static inline int
> +rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile)
profile_id
> +{
> + const struct rte_event_fp_ops *fp_ops;
> + void *port;
> +
> + fp_ops = &rte_event_fp_ops[dev_id];
> + port = fp_ops->data[port_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> + if (dev_id >= RTE_EVENT_MAX_DEVS ||
> + port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
> + return -EINVAL;
> +
> + if (port == NULL)
> + return -EINVAL;
> +
> + if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
> + return -EINVAL;
> +#endif
> + rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile);
> +
> + return fp_ops->profile_switch(port, profile);
> +}
> +
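The effect of the switch on subsequent dequeues can be modelled in a few lines. This is a sketch of the documented behaviour only — `struct port`, `queue_is_schedulable`, and the constants are hypothetical, not part of the DPDK API:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PROFILES 2      /* illustrative */
#define MAX_QUEUES   8      /* illustrative */
#define PRIO_INVALID 0xFFFF

struct port {
	uint16_t map[MAX_PROFILES][MAX_QUEUES]; /* per-profile link table */
	uint8_t active_profile;                 /* profile 0 by default */
};

/* Switch the active profile; takes effect on the next dequeue. */
static int profile_switch(struct port *p, uint8_t profile_id)
{
	if (profile_id >= MAX_PROFILES)
		return -1; /* models the -EINVAL path */
	p->active_profile = profile_id;
	return 0;
}

/* A dequeue only scans queues linked in the currently active profile. */
static int queue_is_schedulable(const struct port *p, uint8_t queue_id)
{
	return p->map[p->active_profile][queue_id] != PRIO_INVALID;
}
```

Because the switch is just an index change over preset link tables, it avoids the link/unlink round trip that reconfiguring a single table would need.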
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
> index c27a52ccc0..5af646ed5c 100644
> --- a/lib/eventdev/rte_eventdev_core.h
> +++ b/lib/eventdev/rte_eventdev_core.h
> @@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
> uint16_t nb_events);
> /**< @internal Enqueue burst of events on crypto adapter */
>
> +typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
> +
> struct rte_event_fp_ops {
> void **data;
> /**< points to array of internal port data pointers */
> @@ -65,7 +67,9 @@ struct rte_event_fp_ops {
> /**< PMD Tx adapter enqueue same destination function. */
> event_crypto_adapter_enqueue_t ca_enqueue;
> /**< PMD Crypto adapter enqueue function. */
> - uintptr_t reserved[5];
> + event_profile_switch_t profile_switch;
> + /**< PMD Event switch profile function. */
> + uintptr_t reserved[4];
> } __rte_cache_aligned;
>
> extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
> index af2172d2a5..04d510ad00 100644
> --- a/lib/eventdev/rte_eventdev_trace_fp.h
> +++ b/lib/eventdev/rte_eventdev_trace_fp.h
> @@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
> rte_trace_point_emit_int(op);
> )
>
> +RTE_TRACE_POINT_FP(
> + rte_eventdev_trace_port_profile_switch,
> + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
> + rte_trace_point_emit_u8(dev_id);
> + rte_trace_point_emit_u8(port_id);
> + rte_trace_point_emit_u8(profile);
> +)
> +
> RTE_TRACE_POINT_FP(
> rte_eventdev_trace_eth_tx_adapter_enqueue,
> RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 7ce09a87bb..f88decee39 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -134,6 +134,10 @@ EXPERIMENTAL {
>
> # added in 23.11
> rte_event_eth_rx_adapter_create_ext_with_params;
> + rte_event_port_profile_links_set;
> + rte_event_port_profile_unlink;
> + rte_event_port_profile_links_get;
> + __rte_eventdev_trace_port_profile_switch;
> };
With above changes,
Acked-by: Jerin Jacob <jerinj@marvell.com>
>
> INTERNAL {
> --
> 2.25.1
>
* Re: [PATCH v3 2/3] event/cnxk: implement event link profiles
2023-09-21 10:28 ` [PATCH v3 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-09-27 15:29 ` Jerin Jacob
0 siblings, 0 replies; 44+ messages in thread
From: Jerin Jacob @ 2023-09-27 15:29 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Thu, Sep 21, 2023 at 3:59 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Implement event link profiles support on CN10K and CN9K.
> Both the platforms support up to 2 link profiles.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> doc/guides/eventdevs/cnxk.rst | 1 +
> doc/guides/eventdevs/features/cnxk.ini | 3 +-
> doc/guides/rel_notes/release_23_11.rst | 5 ++
> drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> drivers/common/cnxk/roc_sso.c | 18 +++----
> drivers/common/cnxk/roc_sso.h | 8 +--
> drivers/common/cnxk/roc_sso_priv.h | 4 +-
> drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
> drivers/event/cnxk/cn10k_worker.c | 11 ++++
> drivers/event/cnxk/cn10k_worker.h | 1 +
> drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
> drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
> drivers/event/cnxk/cn9k_worker.h | 2 +
> drivers/event/cnxk/cnxk_eventdev.c | 38 +++++++------
> drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
> 15 files changed, 164 insertions(+), 82 deletions(-)
>
> diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
> index 1a59233282..cccb8a0304 100644
> --- a/doc/guides/eventdevs/cnxk.rst
> +++ b/doc/guides/eventdevs/cnxk.rst
> @@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
> - HW managed event vectorization on CN10K for packets enqueued from ethdev to
> eventdev configurable per each Rx queue in Rx adapter.
> - Event vector transmission via Tx adapter.
> +- Up to 2 event link profiles.
> [Eth Rx adapter Features]
> internal_port = Y
> diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
> index e714fc2be5..69b3e4a1d8 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -107,6 +107,11 @@ New Features
>
> * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
>
> +* **Added support for link profiles for Marvell CNXK event device driver.**
> +
> + Marvell CNXK event device driver supports up to two link profiles per event
> + port. Added support to advertise link profile capabilities and supporting APIs.
> +
Move "Added eventdev support to link queues to port with profile"
section after "Added new Ethernet Rx Adapter create API"
As lib changes should come first and then PMD changes.
Trim the above text to one bullet under "Updated Marvell cnxk eventdev driver".
* Added support for ``link profiles``.
or so.
* [PATCH v4 0/3] Introduce event link profiles
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
` (3 preceding siblings ...)
2023-09-27 14:56 ` [PATCH v3 0/3] Introduce event link profiles Jerin Jacob
@ 2023-09-28 10:12 ` pbhagavatula
2023-09-28 10:12 ` [PATCH v4 1/3] eventdev: introduce " pbhagavatula
` (4 more replies)
4 siblings, 5 replies; 44+ messages in thread
From: pbhagavatula @ 2023-09-28 10:12 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a link profile. Multiple such profiles
can be configured based on the event device capability using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This affects the next
`rte_event_dequeue_burst` call, where the event queues associated with the
newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:

Config path:

	uint8_t lq[4] = {4, 5, 6, 7};
	uint8_t hq[4] = {0, 1, 2, 3};

	if (rte_event_dev_info.max_profiles_per_port < 2)
		return -ENOTSUP;

	rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
	rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);

Worker path:

	empty_high_deq = 0;
	empty_low_deq = 0;
	is_low_deq = 0;
	while (1) {
		deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
		if (deq == 0) {
			/**
			 * Change link profile based on work activity on current
			 * active profile
			 */
			if (is_low_deq) {
				empty_low_deq++;
				if (empty_low_deq == MAX_LOW_RETRY) {
					rte_event_port_profile_switch(0, 0, 0);
					is_low_deq = 0;
					empty_low_deq = 0;
				}
				continue;
			}

			empty_high_deq++;
			if (empty_high_deq == MAX_HIGH_RETRY) {
				rte_event_port_profile_switch(0, 0, 1);
				is_low_deq = 1;
				empty_high_deq = 0;
			}
			continue;
		}

		// Process the event received.

		if (is_low_deq++ == MAX_LOW_EVENTS) {
			rte_event_port_profile_switch(0, 0, 0);
			is_low_deq = 0;
		}
	}
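The switching heuristic in the worker loop above can be distilled into a small self-contained function that is easy to reason about in isolation. The state layout and return convention here are illustrative, not part of any API:

```c
#include <assert.h>

#define MAX_LOW_RETRY  4 /* illustrative thresholds */
#define MAX_HIGH_RETRY 4

struct sw_state { int is_low; int empty_low; int empty_high; };

/* Given whether the last dequeue returned any events, returns the
 * profile to switch to (0 = high, 1 = low), or -1 to stay on the
 * current profile. */
static int next_profile(struct sw_state *s, int got_event)
{
	if (got_event)
		return -1;
	if (s->is_low) {
		if (++s->empty_low == MAX_LOW_RETRY) {
			s->is_low = 0;
			s->empty_low = 0;
			return 0; /* fall back to the high-priority profile */
		}
	} else if (++s->empty_high == MAX_HIGH_RETRY) {
		s->is_low = 1;
		s->empty_high = 0;
		return 1; /* try the low-priority profile */
	}
	return -1;
}
```

A worker would call `rte_event_port_profile_switch` whenever this returns a non-negative profile id.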
An application could use heuristic data of load/activity of a given event
port and change its active profile to adapt to the traffic pattern.
An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
v4 Changes:
----------
- Address review comments (Jerin).
v3 Changes:
----------
- Rebase to next-eventdev
- Rename testcase name to match API.
v2 Changes:
----------
- Fix compilation.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 14 +-
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++--
drivers/event/cnxk/cn10k_worker.c | 11 ++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 +++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 6 +-
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
28 files changed, 814 insertions(+), 110 deletions(-)
--
2.25.1
* [PATCH v4 1/3] eventdev: introduce link profiles
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
@ 2023-09-28 10:12 ` pbhagavatula
2023-10-03 6:55 ` Jerin Jacob
2023-09-28 10:12 ` [PATCH v4 2/3] event/cnxk: implement event " pbhagavatula
` (3 subsequent siblings)
4 siblings, 1 reply; 44+ messages in thread
From: pbhagavatula @ 2023-09-28 10:12 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a link profile. Multiple
such profiles can be created based on the event device capability
using the function `rte_event_port_profile_links_set`, which takes
arguments similar to `rte_event_port_link` in addition to the profile
identifier.
The maximum number of link profiles supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This affects
the next `rte_event_dequeue_burst` call, where the event queues
associated with the newly active link profile will participate in
scheduling.
An unlink function `rte_event_port_profile_unlink` is provided
to modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the
links associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 10 +
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 6 +-
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
13 files changed, 535 insertions(+), 28 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..1c0082352b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
carry_flow_id =
maintenance_free =
runtime_queue_attr =
+profile_links =
;
; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..4bc0de4cdc 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+Linking Queues to Ports with link profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
> +An application can use link profiles, if supported by the underlying event device,
> +to set up multiple link profiles per port and change them at runtime depending
> +upon heuristic data. Using link profiles can reduce the overhead of linking/unlinking
> +and waiting for unlinks in progress in the fast path, and gives applications the
> +ability to switch between preset profiles on the fly.
+
> +An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lq[4] = {4, 5, 6, 7};
+ uint8_t hq[4] = {0, 1, 2, 3};
+
+ if (rte_event_dev_info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+ rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t profile_id_to_switch;
+
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ profile_id_to_switch = app_find_profile_id_to_switch();
+ rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+ continue;
+ }
+
+ // Process the event received.
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index b34ddc0860..e08e2eadce 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -89,6 +89,16 @@ New Features
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
to get the remaining ticks to expire for a given event timer.
+* **Added eventdev support to link queues to port with link profile.**
+
+ Introduced event link profiles that can be used to associated links between
+ event queues and an event port with a unique identifier termed as link profile.
+ The profile can be used to switch between the associated links in fast-path
+ without the additional overhead of linking/unlinking and waiting for unlinking.
+
+ * Added ``rte_event_port_profile_links_set``, ``rte_event_port_profile_unlink``
+ ``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch``
+ APIs to enable this feature.
Removed Items
-------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f62f42e140..9585c0ca24 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
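The reworked ``links_map`` above gains a leading profile dimension, while the second dimension still flattens (port, queue) pairs as before. A minimal sketch of that flattened indexing, with an illustrative constant standing in for ``RTE_EVENT_MAX_QUEUES_PER_DEV``:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_QUEUES_PER_DEV 255 /* models RTE_EVENT_MAX_QUEUES_PER_DEV */

/* Offset of (port_id, queue_id) within one profile's flattened link map,
 * i.e. links_map[profile_id][links_map_index(port_id, queue_id)]. */
static inline unsigned int links_map_index(uint8_t port_id, uint8_t queue_id)
{
	return (unsigned int)port_id * MAX_QUEUES_PER_DEV + queue_id;
}
```

Keeping the profile as the outermost dimension lets each profile's whole table be handed to the PMD as one contiguous slab.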
@@ -178,6 +178,9 @@ struct rte_eventdev {
event_tx_adapter_enqueue_t txa_enqueue;
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
+ /**< PMD Crypto adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uint64_t reserved_64s[4]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a link profile to a
+ * destination event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile_id
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile_id);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a link profile from
+ * destination event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile_id
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile_id);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1348,8 +1399,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..b90a3a3833 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
return 0;
}
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile_id)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
.txa_enqueue_same_dest =
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+ .profile_switch = dummy_event_port_profile_switch,
.data = dummy_data,
};
@@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue = dev->txa_enqueue;
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
+ fp_op->profile_switch = dev->profile_switch;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..9c2b261c06 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_set,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile_id, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_unlink,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile_id, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
@@ -487,6 +509,16 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(count);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_get,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile_id,
+ int count),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(count);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlinks_in_progress,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..8024e07531 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
+ lib.eventdev.port.profile.links.set)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
+ lib.eventdev.port.profile.unlink)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
@@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
lib.eventdev.maintain)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
+ lib.eventdev.port.profile.switch)
+
/* Eventdev Rx adapter trace points */
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create,
lib.eventdev.rx.adapter.create)
@@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get,
lib.eventdev.port.links.get)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get,
+ lib.eventdev.port.profile.links.get)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress,
lib.eventdev.port.unlinks.in.progress)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..33a3154d5d 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -95,6 +95,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
return -EINVAL;
memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+ dev_info->max_profiles_per_port = 1;
if (*dev->dev_ops->dev_infos_get == NULL)
return -ENOTSUP;
@@ -270,7 +271,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -281,7 +282,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -297,9 +297,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -953,21 +955,45 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile_id && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -995,18 +1021,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile_id)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile_id);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile_id];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile_id, diag);
return diag;
}
@@ -1014,27 +1044,51 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile_id)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile_id && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile_id];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1063,16 +1117,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile_id)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile_id);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile_id, diag);
return diag;
}
@@ -1116,7 +1173,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile_id. */
+ links_map = dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1132,6 +1190,49 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return count;
}
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile_id)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile_id];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile_id, count);
+
+ return count;
+}
+
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
uint64_t *timeout_ticks)
@@ -1440,7 +1541,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1460,11 +1561,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
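The hunks above turn the single flat `links_map` into a per-profile 2-D array, initialized to the invalid-priority sentinel and indexed by profile, then by `port * RTE_EVENT_MAX_QUEUES_PER_DEV + queue`. A minimal standalone sketch of that bookkeeping follows; the constants, the `0xdead` sentinel value, and all helper names are illustrative stand-ins, not DPDK's real configuration:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sizes; the real values come from rte_config.h. */
#define MAX_PROFILES_PER_PORT 8
#define MAX_PORTS_PER_DEV 4
#define MAX_QUEUES_PER_DEV 8
#define QUEUE_SERVICE_PRIORITY_INVALID 0xdead /* assumed sentinel */

/* Per-profile link map: one priority slot per (port, queue) pair. */
static uint16_t links_map[MAX_PROFILES_PER_PORT]
			 [MAX_PORTS_PER_DEV * MAX_QUEUES_PER_DEV];

/* Mirrors eventdev_data_alloc(): every slot of every profile is invalid. */
static void links_map_init(void)
{
	for (int i = 0; i < MAX_PROFILES_PER_PORT; i++)
		for (int n = 0; n < MAX_PORTS_PER_DEV * MAX_QUEUES_PER_DEV; n++)
			links_map[i][n] = QUEUE_SERVICE_PRIORITY_INVALID;
}

/* Record a link of `queue` to `port` with priority `prio` under `profile`,
 * using the same port-offset indexing as rte_event_port_profile_links_set(). */
static void link_set(uint8_t profile, uint8_t port, uint8_t queue, uint8_t prio)
{
	uint16_t *map = links_map[profile] + (port * MAX_QUEUES_PER_DEV);

	map[queue] = prio;
}

static uint16_t link_get(uint8_t profile, uint8_t port, uint8_t queue)
{
	return links_map[profile][port * MAX_QUEUES_PER_DEV + queue];
}
```

The key property the patch relies on: a link made under one profile leaves every other profile's map untouched.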
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2ba8a7b090..23cbff939f 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
* rte_event_queue_setup().
*/
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/**< Event device is capable of supporting multiple link profiles per event
+ * port, i.e., the value of `rte_event_dev_info::max_profiles_per_port` is
+ * greater than one.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue link profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile, i.e., profile_id 0, of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile, i.e., profile_id 0, of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1626,6 +1644,136 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id*, under the profile identifier
+ * supplied in *profile_id*, with service priorities supplied in *priorities*,
+ * on the event device designated by its *dev_id*.
+ *
+ * If *profile_id* is set to 0, the links created by the call
+ * `rte_event_port_link` will be overwritten.
+ *
+ * Event ports by default use profile_id 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s) supplied in *queues*.
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues* which were previously supplied to
+ * rte_event_dev_configure() to the event port *port_id*.
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to the event port.
+ * The priority defines the event port's servicing priority for the
+ * event queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to an
+ * implementation supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority.
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on a specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded (the application tried to link a queue configured
+ * with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile_id* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile_id* is set to 0, i.e., the default profile, this function
+ * behaves the same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue *queue_id*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on a specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile_id);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1680,6 +1828,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a *profile_id* and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile_id);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile_id
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile_id* is invalid.
+ */
+__rte_experimental
+static inline int
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile_id)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile_id);
+
+ return fp_ops->profile_switch(port, profile_id);
+}
+
#ifdef __cplusplus
}
#endif
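The contract documented for ``rte_event_port_profile_switch`` above — after a switch, only queues linked under the newly active profile participate in the next dequeue — can be modeled with a toy port structure. This is a behavioral sketch only; every name here is hypothetical and none of it is part of the eventdev API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NB_QUEUES 8
#define NB_PROFILES 2

/* Toy port: per-profile linked-queue flags plus the active profile index. */
struct toy_port {
	uint8_t linked[NB_PROFILES][NB_QUEUES]; /* 1 if queue linked in profile */
	uint8_t active_profile;
};

/* Analogue of rte_event_port_profile_links_set() for one queue. */
static void toy_link(struct toy_port *p, uint8_t profile, uint8_t queue)
{
	p->linked[profile][queue] = 1;
}

/* Analogue of rte_event_port_profile_switch(); mirrors the -EINVAL path. */
static int toy_profile_switch(struct toy_port *p, uint8_t profile)
{
	if (profile >= NB_PROFILES)
		return -1;
	p->active_profile = profile;
	return 0;
}

/* A dequeue would only consider queues linked in the active profile. */
static bool toy_queue_schedulable(const struct toy_port *p, uint8_t queue)
{
	return p->linked[p->active_profile][queue] != 0;
}
```

Switching profiles flips the entire set of schedulable queues in one cheap operation, which is the point of the feature: no per-queue relink at runtime.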
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c27a52ccc0..5af646ed5c 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
uint16_t nb_events);
/**< @internal Enqueue burst of events on crypto adapter */
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+/**< @internal Switch the active link profile on an event port */
+
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -65,7 +67,9 @@ struct rte_event_fp_ops {
/**< PMD Tx adapter enqueue same destination function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< PMD Crypto adapter enqueue function. */
- uintptr_t reserved[5];
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
+ uintptr_t reserved[4];
} __rte_cache_aligned;
extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..04d510ad00 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_port_profile_switch,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 7ce09a87bb..f88decee39 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -134,6 +134,10 @@ EXPERIMENTAL {
# added in 23.11
rte_event_eth_rx_adapter_create_ext_with_params;
+ rte_event_port_profile_links_set;
+ rte_event_port_profile_unlink;
+ rte_event_port_profile_links_get;
+ __rte_eventdev_trace_port_profile_switch;
};
INTERNAL {
--
2.25.1
* [PATCH v4 2/3] event/cnxk: implement event link profiles
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
2023-09-28 10:12 ` [PATCH v4 1/3] eventdev: introduce " pbhagavatula
@ 2023-09-28 10:12 ` pbhagavatula
2023-09-28 10:12 ` [PATCH v4 3/3] test/event: add event link profile test pbhagavatula
` (2 subsequent siblings)
4 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-09-28 10:12 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profile support on CN10K and CN9K.
Both platforms support up to two link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/rel_notes/release_23_11.rst | 12 +++--
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
15 files changed, 166 insertions(+), 86 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index bee69bf8f4..5d353e3670 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,7 +12,8 @@ runtime_port_link = Y
multiple_queue_port = Y
carry_flow_id = Y
maintenance_free = Y
-runtime_queue_attr = y
+runtime_queue_attr = Y
+profile_links = Y
[Eth Rx adapter Features]
internal_port = Y
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index e08e2eadce..700a1557e6 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -84,11 +84,6 @@ New Features
for creating Rx adapter instance for the applications desire to
control both the event port allocation and event buffer size.
-* **Updated Marvell cnxk eventdev driver.**
-
- * Added support for ``remaining_ticks_get`` timer adapter PMD callback
- to get the remaining ticks to expire for a given event timer.
-
* **Added eventdev support to link queues to port with link profile.**
Introduced event link profiles that can be used to associate links between
@@ -100,6 +95,13 @@ New Features
``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch``
APIs to enable this feature.
+* **Updated Marvell cnxk eventdev driver.**
+
+ * Added support for ``remaining_ticks_get`` timer adapter PMD callback
+ to get the remaining ticks to expire for a given event timer.
+ * Added link profile support to the Marvell CNXK event device driver;
+ * up to two link profiles are supported per event port.
+
Removed Items
-------------
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index c37da685da..748d287bad 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -186,8 +186,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -204,7 +204,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
@@ -290,8 +290,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -299,14 +299,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -314,7 +314,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
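The new *set* argument threaded through ``sso_hws_link_modify`` lands in the mask word as ``hwgrp | set << 12 | enable << 14``, alongside the existing enable bit. A hedged sketch of just that mask math; the helper name is hypothetical and the bit positions are taken from the diff above, not from hardware documentation:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the mask built in sso_hws_link_modify():
 * low bits carry the hwgrp id, bits 12-13 select the link-profile set,
 * and bit 14 is the link-enable flag. */
static uint32_t sso_link_mask(uint16_t hwgrp, uint8_t set, uint16_t enable)
{
	return (uint32_t)hwgrp | (uint32_t)set << 12 | (uint32_t)enable << 14;
}
```

With the set field at bit 12, the same hwgrp can be linked independently under each profile, which is what lets link and unlink take a per-profile view of the hardware group mappings.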
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index 8ee62afb9a..64f14b8119 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index cf186b9af4..bb0c910553 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -482,6 +483,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->profile_switch = cn10k_sso_hws_profile_switch;
#else
RTE_SET_USED(event_dev);
#endif
@@ -633,9 +635,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -644,14 +645,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -659,11 +660,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -1020,6 +1035,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e71ab3c523..26fecf21fb 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -329,6 +329,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index fe6f5d9f86..9fb9ca0d63 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->profile_switch = cn9k_sso_hws_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+ event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -708,9 +709,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -719,14 +719,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -734,11 +734,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1019,6 +1033,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ee659e80d6..6936b7ad04 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -366,6 +366,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -382,6 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 9c9192bd40..0c61f4c20e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,7 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
- RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+ RTE_EVENT_DEV_CAP_PROFILE_LINK;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -128,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map;
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -435,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -446,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bd50de87c0..d42d1afa1a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v4 3/3] test/event: add event link profile test
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
2023-09-28 10:12 ` [PATCH v4 1/3] eventdev: introduce " pbhagavatula
2023-09-28 10:12 ` [PATCH v4 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-09-28 10:12 ` pbhagavatula
2023-09-28 14:45 ` [PATCH v4 0/3] Introduce event link profiles Jerin Jacob
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
4 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-09-28 10:12 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index c51c93bdbd..0ecfa7db02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_profile_switch(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+ TEST_ASSERT(rc == 1, "Failed to get links for profile 0");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+ TEST_ASSERT(rc == 1, "Failed to get links for profile 1");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ q = 0;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 0 from port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 1 from port 0 with profile 1");
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_profile_switch),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
* Re: [PATCH v4 0/3] Introduce event link profiles
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
` (2 preceding siblings ...)
2023-09-28 10:12 ` [PATCH v4 3/3] test/event: add event link profile test pbhagavatula
@ 2023-09-28 14:45 ` Jerin Jacob
2023-09-29 9:27 ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
4 siblings, 1 reply; 44+ messages in thread
From: Jerin Jacob @ 2023-09-28 14:45 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Thu, Sep 28, 2023 at 3:42 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
+ @Thomas Monjalon @David Marchand @Aaron Conole @Michael Santana
There is a CI failure in the apply stage[1] where it is picking up the main tree
commit. Not sure why it is using the main tree.
Pavan,
Could you resend this series to give CI one more chance.
[1]
https://patches.dpdk.org/project/dpdk/patch/20230928101205.4352-2-pbhagavatula@marvell.com/
>
> A collection of event queues linked to an event port can be associated
> with a unique identifier called a link profile; multiple such profiles
> can be configured based on the event device capability using the function
> `rte_event_port_profile_links_set` which takes arguments similar to
> `rte_event_port_link` in addition to the profile identifier.
>
> The maximum number of link profiles supported by an event device is
> advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
>
> By default, event ports are configured to use the link profile 0 on
> initialization.
>
> Once multiple link profiles are set up and the event device is started, the
> application can use the function `rte_event_port_profile_switch` to change
> the currently active profile on an event port. This affects the next
> `rte_event_dequeue_burst` call, where the event queues associated with the
> newly active link profile will participate in scheduling.
>
> A rudimentary workflow would look something like this:
>
> Config path:
>
> uint8_t lq[4] = {4, 5, 6, 7};
> uint8_t hq[4] = {0, 1, 2, 3};
>
> if (rte_event_dev_info.max_profiles_per_port < 2)
> return -ENOTSUP;
>
> rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
> rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
>
> Worker path:
>
> empty_high_deq = 0;
> empty_low_deq = 0;
> is_low_deq = 0;
> while (1) {
> deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> if (deq == 0) {
> /**
> * Change link profile based on work activity on current
> * active profile
> */
> if (is_low_deq) {
> empty_low_deq++;
> if (empty_low_deq == MAX_LOW_RETRY) {
> rte_event_port_profile_switch(0, 0, 0);
> is_low_deq = 0;
> empty_low_deq = 0;
> }
> continue;
> }
>
> if (empty_high_deq == MAX_HIGH_RETRY) {
> rte_event_port_profile_switch(0, 0, 1);
> is_low_deq = 1;
> empty_high_deq = 0;
> }
> continue;
> }
>
> // Process the event received.
>
> if (is_low_deq++ == MAX_LOW_EVENTS) {
> rte_event_port_profile_switch(0, 0, 0);
> is_low_deq = 0;
> }
> }
>
> An application could use heuristic data of load/activity of a given event
> port and change its active profile to adapt to the traffic pattern.
>
> An unlink function `rte_event_port_profile_unlink` is provided to
> modify the links associated to a profile, and
> `rte_event_port_profile_links_get` can be used to retrieve the links
> associated with a profile.
>
> Using link profiles can reduce the overhead of linking/unlinking and
> waiting for unlinks in progress in fast-path and gives applications
> the ability to switch between preset profiles on the fly.
>
> v4 Changes:
> ----------
> - Address review comments (Jerin).
>
> v3 Changes:
> ----------
> - Rebase to next-eventdev
> - Rename testcase name to match API.
>
> v2 Changes:
> ----------
> - Fix compilation.
>
> Pavan Nikhilesh (3):
> eventdev: introduce link profiles
> event/cnxk: implement event link profiles
> test/event: add event link profile test
>
> app/test/test_eventdev.c | 117 +++++++++++
> config/rte_config.h | 1 +
> doc/guides/eventdevs/cnxk.rst | 1 +
> doc/guides/eventdevs/features/cnxk.ini | 3 +-
> doc/guides/eventdevs/features/default.ini | 1 +
> doc/guides/prog_guide/eventdev.rst | 40 ++++
> doc/guides/rel_notes/release_23_11.rst | 14 +-
> drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> drivers/common/cnxk/roc_sso.c | 18 +-
> drivers/common/cnxk/roc_sso.h | 8 +-
> drivers/common/cnxk/roc_sso_priv.h | 4 +-
> drivers/event/cnxk/cn10k_eventdev.c | 45 +++--
> drivers/event/cnxk/cn10k_worker.c | 11 ++
> drivers/event/cnxk/cn10k_worker.h | 1 +
> drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
> drivers/event/cnxk/cn9k_worker.c | 22 +++
> drivers/event/cnxk/cn9k_worker.h | 2 +
> drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
> drivers/event/cnxk/cnxk_eventdev.h | 10 +-
> lib/eventdev/eventdev_pmd.h | 59 +++++-
> lib/eventdev/eventdev_private.c | 9 +
> lib/eventdev/eventdev_trace.h | 32 +++
> lib/eventdev/eventdev_trace_points.c | 12 ++
> lib/eventdev/rte_eventdev.c | 150 +++++++++++---
> lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
> lib/eventdev/rte_eventdev_core.h | 6 +-
> lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> lib/eventdev/version.map | 4 +
> 28 files changed, 814 insertions(+), 110 deletions(-)
>
> --
> 2.25.1
>
* RE: [EXT] Re: [PATCH v4 0/3] Introduce event link profiles
2023-09-28 14:45 ` [PATCH v4 0/3] Introduce event link profiles Jerin Jacob
@ 2023-09-29 9:27 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 44+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2023-09-29 9:27 UTC (permalink / raw)
To: Jerin Jacob
Cc: Jerin Jacob Kollanukkaran, Shijith Thotton, timothy.mcdaniel,
hemant.agrawal, sachin.saxena, mattias.ronnblom, liangma,
peter.mccarthy, harry.van.haaren, erik.g.carrillo,
abhinandan.gujjar, s.v.naga.harish.k, anatoly.burakov, dev
> On Thu, Sep 28, 2023 at 3:42 PM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> + @Thomas Monjalon @David Marchand @Aaron Conole @Michael
> Santana
>
> There is CI failure in apply stage[1] where it is taking main tree
> commit. Not sure why it is taking main tree?
>
> Pavan,
>
> Could you resend this series again to give one more chance to CI.
>
>
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20230928101205.4352-2-pbhagavatula@marvell.com/
>
>
The CI script that decides which tree to run tests on needs an update when
a series contains a spec change followed by a driver implementation.
I have submitted the following patch to ci@dpdk.org:
https://patches.dpdk.org/project/ci/patch/20230929083443.9925-1-pbhagavatula@marvell.com/
>
> >
> > A collection of event queues linked to an event port can be associated
> > with unique identifier called as a link profile, multiple such profiles
> > can be configured based on the event device capability using the function
> > `rte_event_port_profile_links_set` which takes arguments similar to
> > `rte_event_port_link` in addition to the profile identifier.
> >
> > The maximum link profiles that are supported by an event device is
> > advertised through the structure member
> > `rte_event_dev_info::max_profiles_per_port`.
> >
> > By default, event ports are configured to use the link profile 0 on
> > initialization.
> >
> > Once multiple link profiles are set up and the event device is started, the
> > application can use the function `rte_event_port_profile_switch` to change
> > the currently active profile on an event port. This affects the next
> > `rte_event_dequeue_burst` call, where the event queues associated with
> the
> > newly active link profile will participate in scheduling.
> >
> > A rudimentary workflow would look something like this:
> >
> > Config path:
> >
> > uint8_t lq[4] = {4, 5, 6, 7};
> > uint8_t hq[4] = {0, 1, 2, 3};
> >
> > if (rte_event_dev_info.max_profiles_per_port < 2)
> > return -ENOTSUP;
> >
> > rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
> > rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
> >
> > Worker path:
> >
> > empty_high_deq = 0;
> > empty_low_deq = 0;
> > is_low_deq = 0;
> > while (1) {
> > deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> > if (deq == 0) {
> > /**
> > * Change link profile based on work activity on current
> > * active profile
> > */
> > if (is_low_deq) {
> > empty_low_deq++;
> > if (empty_low_deq == MAX_LOW_RETRY) {
> > rte_event_port_profile_switch(0, 0, 0);
> > is_low_deq = 0;
> > empty_low_deq = 0;
> > }
> > continue;
> > }
> >
> > if (empty_high_deq == MAX_HIGH_RETRY) {
> > rte_event_port_profile_switch(0, 0, 1);
> > is_low_deq = 1;
> > empty_high_deq = 0;
> > }
> > continue;
> > }
> >
> > // Process the event received.
> >
> > if (is_low_deq++ == MAX_LOW_EVENTS) {
> > rte_event_port_profile_switch(0, 0, 0);
> > is_low_deq = 0;
> > }
> > }
> >
> > An application could use heuristic data of load/activity of a given event
> > port and change its active profile to adapt to the traffic pattern.
> >
> > An unlink function `rte_event_port_profile_unlink` is provided to
> > modify the links associated to a profile, and
> > `rte_event_port_profile_links_get` can be used to retrieve the links
> > associated with a profile.
> >
> > Using link profiles can reduce the overhead of linking/unlinking and
> > waiting for unlinks in progress in fast-path and gives applications
> > the ability to switch between preset profiles on the fly.
> >
> > v4 Changes:
> > ----------
> > - Address review comments (Jerin).
> >
> > v3 Changes:
> > ----------
> > - Rebase to next-eventdev
> > - Rename testcase name to match API.
> >
> > v2 Changes:
> > ----------
> > - Fix compilation.
> >
> > Pavan Nikhilesh (3):
> > eventdev: introduce link profiles
> > event/cnxk: implement event link profiles
> > test/event: add event link profile test
> >
> > app/test/test_eventdev.c | 117 +++++++++++
> > config/rte_config.h | 1 +
> > doc/guides/eventdevs/cnxk.rst | 1 +
> > doc/guides/eventdevs/features/cnxk.ini | 3 +-
> > doc/guides/eventdevs/features/default.ini | 1 +
> > doc/guides/prog_guide/eventdev.rst | 40 ++++
> > doc/guides/rel_notes/release_23_11.rst | 14 +-
> > drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
> > drivers/common/cnxk/roc_sso.c | 18 +-
> > drivers/common/cnxk/roc_sso.h | 8 +-
> > drivers/common/cnxk/roc_sso_priv.h | 4 +-
> > drivers/event/cnxk/cn10k_eventdev.c | 45 +++--
> > drivers/event/cnxk/cn10k_worker.c | 11 ++
> > drivers/event/cnxk/cn10k_worker.h | 1 +
> > drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
> > drivers/event/cnxk/cn9k_worker.c | 22 +++
> > drivers/event/cnxk/cn9k_worker.h | 2 +
> > drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
> > drivers/event/cnxk/cnxk_eventdev.h | 10 +-
> > lib/eventdev/eventdev_pmd.h | 59 +++++-
> > lib/eventdev/eventdev_private.c | 9 +
> > lib/eventdev/eventdev_trace.h | 32 +++
> > lib/eventdev/eventdev_trace_points.c | 12 ++
> > lib/eventdev/rte_eventdev.c | 150 +++++++++++---
> > lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
> > lib/eventdev/rte_eventdev_core.h | 6 +-
> > lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> > lib/eventdev/version.map | 4 +
> > 28 files changed, 814 insertions(+), 110 deletions(-)
> >
> > --
> > 2.25.1
> >
* Re: [PATCH v4 1/3] eventdev: introduce link profiles
2023-09-28 10:12 ` [PATCH v4 1/3] eventdev: introduce " pbhagavatula
@ 2023-10-03 6:55 ` Jerin Jacob
0 siblings, 0 replies; 44+ messages in thread
From: Jerin Jacob @ 2023-10-03 6:55 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Thu, Sep 28, 2023 at 9:41 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
DMA adapter merge created conflicts; please rebase to the next-eventdev tree.
[for-main]dell[dpdk-next-eventdev] $ git pw series apply 29675
Failed to apply patch:
Applying: eventdev: introduce link profiles
Using index info to reconstruct a base tree...
M config/rte_config.h
M doc/guides/eventdevs/features/default.ini
M doc/guides/prog_guide/eventdev.rst
M doc/guides/rel_notes/release_23_11.rst
M lib/eventdev/eventdev_pmd.h
M lib/eventdev/eventdev_private.c
M lib/eventdev/rte_eventdev.c
M lib/eventdev/rte_eventdev.h
M lib/eventdev/rte_eventdev_core.h
M lib/eventdev/version.map
Falling back to patching base and 3-way merge...
Auto-merging lib/eventdev/version.map
Auto-merging lib/eventdev/rte_eventdev_core.h
CONFLICT (content): Merge conflict in lib/eventdev/rte_eventdev_core.h
Auto-merging lib/eventdev/rte_eventdev.h
Auto-merging lib/eventdev/rte_eventdev.c
Auto-merging lib/eventdev/eventdev_private.c
CONFLICT (content): Merge conflict in lib/eventdev/eventdev_private.c
Auto-merging lib/eventdev/eventdev_pmd.h
CONFLICT (content): Merge conflict in lib/eventdev/eventdev_pmd.h
Auto-merging doc/guides/rel_notes/release_23_11.rst
Auto-merging doc/guides/prog_guide/eventdev.rst
Auto-merging doc/guides/eventdevs/features/default.ini
Auto-merging config/rte_config.h
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 eventdev: introduce link profiles
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
>
> A collection of event queues linked to an event port can be
> associated with a unique identifier called a link profile. Multiple
> such profiles can be created based on the event device capability
> using the function `rte_event_port_profile_links_set`, which takes
> arguments similar to `rte_event_port_link` in addition to the profile
> identifier.
>
> The maximum number of link profiles supported by an event device
> is advertised through the structure member
> `rte_event_dev_info::max_profiles_per_port`.
> By default, event ports are configured to use the link profile 0
> on initialization.
>
> Once multiple link profiles are set up and the event device is started,
> the application can use the function `rte_event_port_profile_switch`
> to change the currently active profile on an event port. This affects
> the next `rte_event_dequeue_burst` call, where the event queues
> associated with the newly active link profile will participate in
> scheduling.
>
> An unlink function `rte_event_port_profile_unlink` is provided
> to modify the links associated with a profile, and
> `rte_event_port_profile_links_get` can be used to retrieve the
> links associated with a profile.
>
> Using link profiles reduces the overhead of linking/unlinking and
> waiting for in-progress unlinks in the fast path, and gives
> applications the ability to switch between preset profiles on the fly.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
> config/rte_config.h | 1 +
> doc/guides/eventdevs/features/default.ini | 1 +
> doc/guides/prog_guide/eventdev.rst | 40 ++++
> doc/guides/rel_notes/release_23_11.rst | 10 +
> lib/eventdev/eventdev_pmd.h | 59 +++++-
> lib/eventdev/eventdev_private.c | 9 +
> lib/eventdev/eventdev_trace.h | 32 +++
> lib/eventdev/eventdev_trace_points.c | 12 ++
> lib/eventdev/rte_eventdev.c | 150 +++++++++++---
> lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
> lib/eventdev/rte_eventdev_core.h | 6 +-
> lib/eventdev/rte_eventdev_trace_fp.h | 8 +
> lib/eventdev/version.map | 4 +
> 13 files changed, 535 insertions(+), 28 deletions(-)
>
> diff --git a/config/rte_config.h b/config/rte_config.h
> index 400e44e3cf..d43b3eecb8 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -73,6 +73,7 @@
> #define RTE_EVENT_MAX_DEVS 16
> #define RTE_EVENT_MAX_PORTS_PER_DEV 255
> #define RTE_EVENT_MAX_QUEUES_PER_DEV 255
> +#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
> #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
> #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
> #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
> diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
> index 00360f60c6..1c0082352b 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -18,6 +18,7 @@ multiple_queue_port =
> carry_flow_id =
> maintenance_free =
> runtime_queue_attr =
> +profile_links =
>
> ;
> ; Features of a default Ethernet Rx adapter.
> diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
> index 2c83176846..4bc0de4cdc 100644
> --- a/doc/guides/prog_guide/eventdev.rst
> +++ b/doc/guides/prog_guide/eventdev.rst
> @@ -317,6 +317,46 @@ can be achieved like this:
> }
> int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
>
> +Linking Queues to Ports with link profiles
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +An application can use link profiles, if supported by the underlying event device, to set up
> +multiple link profiles per port and switch between them at runtime depending on heuristic data.
> +Using link profiles reduces the overhead of linking/unlinking and waiting for in-progress unlinks
> +in the fast path, and gives applications the ability to switch between preset profiles on the fly.
> +
> +An example use case could be as follows.
> +
> +Config path:
> +
> +.. code-block:: c
> +
> + uint8_t lq[4] = {4, 5, 6, 7};
> + uint8_t hq[4] = {0, 1, 2, 3};
> +
> + if (rte_event_dev_info.max_profiles_per_port < 2)
> + return -ENOTSUP;
> +
> + rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
> + rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
> +
> +Worker path:
> +
> +.. code-block:: c
> +
> + uint8_t profile_id_to_switch;
> +
> + while (1) {
> + deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
> + if (deq == 0) {
> + profile_id_to_switch = app_find_profile_id_to_switch();
> + rte_event_port_profile_switch(0, 0, profile_id_to_switch);
> + continue;
> + }
> +
> + // Process the event received.
> + }
> +
> Starting the EventDev
> ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
> index b34ddc0860..e08e2eadce 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -89,6 +89,16 @@ New Features
> * Added support for ``remaining_ticks_get`` timer adapter PMD callback
> to get the remaining ticks to expire for a given event timer.
>
> +* **Added eventdev support to link queues to port with link profile.**
> +
> + Introduced event link profiles that can be used to associate links between
> + event queues and an event port with a unique identifier termed a link profile.
> + The profile can be used to switch between the associated links in fast-path
> + without the additional overhead of linking/unlinking and waiting for unlinking.
> +
> + * Added ``rte_event_port_profile_links_set``, ``rte_event_port_profile_unlink``,
> + ``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch``
> + APIs to enable this feature.
>
> Removed Items
> -------------
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index f62f42e140..9585c0ca24 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -119,8 +119,8 @@ struct rte_eventdev_data {
> /**< Array of port configuration structures. */
> struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
> /**< Array of queue configuration structures. */
> - uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
> - RTE_EVENT_MAX_QUEUES_PER_DEV];
> + uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
> + [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
> /**< Memory to store queues to port connections. */
> void *dev_private;
> /**< PMD-specific private data */
> @@ -178,6 +178,9 @@ struct rte_eventdev {
> event_tx_adapter_enqueue_t txa_enqueue;
> /**< Pointer to PMD eth Tx adapter enqueue function. */
> event_crypto_adapter_enqueue_t ca_enqueue;
> + /**< PMD Crypto adapter enqueue function. */
> + event_profile_switch_t profile_switch;
> + /**< PMD Event switch profile function. */
>
> uint64_t reserved_64s[4]; /**< Reserved for future fields */
> void *reserved_ptrs[3]; /**< Reserved for future fields */
> @@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
> const uint8_t queues[], const uint8_t priorities[],
> uint16_t nb_links);
>
> +/**
> + * Link multiple source event queues associated with a link profile to a
> + * destination event port.
> + *
> + * @param dev
> + * Event device pointer
> + * @param port
> + * Event port pointer
> + * @param queues
> + * Points to an array of *nb_links* event queues to be linked
> + * to the event port.
> + * @param priorities
> + * Points to an array of *nb_links* service priorities associated with each
> + * event queue link to event port.
> + * @param nb_links
> + * The number of links to establish.
> + * @param profile_id
> + * The profile ID to associate the links.
> + *
> + * @return
> + * Returns 0 on success.
> + */
> +typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
> + const uint8_t queues[], const uint8_t priorities[],
> + uint16_t nb_links, uint8_t profile_id);
> +
> /**
> * Unlink multiple source event queues from destination event port.
> *
> @@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
> typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
> uint8_t queues[], uint16_t nb_unlinks);
>
> +/**
> + * Unlink multiple source event queues associated with a link profile from
> + * destination event port.
> + *
> + * @param dev
> + * Event device pointer
> + * @param port
> + * Event port pointer
> + * @param queues
> + * An array of *nb_unlinks* event queues to be unlinked from the event port.
> + * @param nb_unlinks
> + * The number of unlinks to establish
> + * @param profile_id
> + * The profile ID of the associated links.
> + *
> + * @return
> + * Returns 0 on success.
> + */
> +typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
> + uint8_t queues[], uint16_t nb_unlinks,
> + uint8_t profile_id);
> +
> /**
> * Unlinks in progress. Returns number of unlinks that the PMD is currently
> * performing, but have not yet been completed.
> @@ -1348,8 +1399,12 @@ struct eventdev_ops {
>
> eventdev_port_link_t port_link;
> /**< Link event queues to an event port. */
> + eventdev_port_link_profile_t port_link_profile;
> + /**< Link event queues associated with a profile to an event port. */
> eventdev_port_unlink_t port_unlink;
> /**< Unlink event queues from an event port. */
> + eventdev_port_unlink_profile_t port_unlink_profile;
> + /**< Unlink event queues associated with a profile from an event port. */
> eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
> /**< Unlinks in progress on an event port. */
> eventdev_dequeue_timeout_ticks_t timeout_ticks;
> diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
> index 1d3d9d357e..b90a3a3833 100644
> --- a/lib/eventdev/eventdev_private.c
> +++ b/lib/eventdev/eventdev_private.c
> @@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
> return 0;
> }
>
> +static int
> +dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile_id)
> +{
> + RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
> + return -EINVAL;
> +}
> +
> void
> event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
> {
> @@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
> .txa_enqueue_same_dest =
> dummy_event_tx_adapter_enqueue_same_dest,
> .ca_enqueue = dummy_event_crypto_adapter_enqueue,
> + .profile_switch = dummy_event_port_profile_switch,
> .data = dummy_data,
> };
>
> @@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
> fp_op->txa_enqueue = dev->txa_enqueue;
> fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
> fp_op->ca_enqueue = dev->ca_enqueue;
> + fp_op->profile_switch = dev->profile_switch;
> fp_op->data = dev->data->ports;
> }
> diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
> index f008ef0091..9c2b261c06 100644
> --- a/lib/eventdev/eventdev_trace.h
> +++ b/lib/eventdev/eventdev_trace.h
> @@ -76,6 +76,17 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_int(rc);
> )
>
> +RTE_TRACE_POINT(
> + rte_eventdev_trace_port_profile_links_set,
> + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
> + uint16_t nb_links, uint8_t profile_id, int rc),
> + rte_trace_point_emit_u8(dev_id);
> + rte_trace_point_emit_u8(port_id);
> + rte_trace_point_emit_u16(nb_links);
> + rte_trace_point_emit_u8(profile_id);
> + rte_trace_point_emit_int(rc);
> +)
> +
> RTE_TRACE_POINT(
> rte_eventdev_trace_port_unlink,
> RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
> @@ -86,6 +97,17 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_int(rc);
> )
>
> +RTE_TRACE_POINT(
> + rte_eventdev_trace_port_profile_unlink,
> + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
> + uint16_t nb_unlinks, uint8_t profile_id, int rc),
> + rte_trace_point_emit_u8(dev_id);
> + rte_trace_point_emit_u8(port_id);
> + rte_trace_point_emit_u16(nb_unlinks);
> + rte_trace_point_emit_u8(profile_id);
> + rte_trace_point_emit_int(rc);
> +)
> +
> RTE_TRACE_POINT(
> rte_eventdev_trace_start,
> RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
> @@ -487,6 +509,16 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_int(count);
> )
>
> +RTE_TRACE_POINT(
> + rte_eventdev_trace_port_profile_links_get,
> + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile_id,
> + int count),
> + rte_trace_point_emit_u8(dev_id);
> + rte_trace_point_emit_u8(port_id);
> + rte_trace_point_emit_u8(profile_id);
> + rte_trace_point_emit_int(count);
> +)
> +
> RTE_TRACE_POINT(
> rte_eventdev_trace_port_unlinks_in_progress,
> RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
> diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
> index 76144cfe75..8024e07531 100644
> --- a/lib/eventdev/eventdev_trace_points.c
> +++ b/lib/eventdev/eventdev_trace_points.c
> @@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
> lib.eventdev.port.link)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
> + lib.eventdev.port.profile.links.set)
> +
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
> lib.eventdev.port.unlink)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
> + lib.eventdev.port.profile.unlink)
> +
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
> lib.eventdev.start)
>
> @@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
> lib.eventdev.maintain)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
> + lib.eventdev.port.profile.switch)
> +
> /* Eventdev Rx adapter trace points */
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create,
> lib.eventdev.rx.adapter.create)
> @@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get,
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get,
> lib.eventdev.port.links.get)
>
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get,
> + lib.eventdev.port.profile.links.get)
> +
> RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress,
> lib.eventdev.port.unlinks.in.progress)
>
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 6ab4524332..33a3154d5d 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -95,6 +95,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
> return -EINVAL;
>
> memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> + dev_info->max_profiles_per_port = 1;
>
> if (*dev->dev_ops->dev_infos_get == NULL)
> return -ENOTSUP;
> @@ -270,7 +271,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
> void **ports;
> uint16_t *links_map;
> struct rte_event_port_conf *ports_cfg;
> - unsigned int i;
> + unsigned int i, j;
>
> RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> dev->data->dev_id);
> @@ -281,7 +282,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
>
> ports = dev->data->ports;
> ports_cfg = dev->data->ports_cfg;
> - links_map = dev->data->links_map;
>
> for (i = nb_ports; i < old_nb_ports; i++)
> (*dev->dev_ops->port_release)(ports[i]);
> @@ -297,9 +297,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
> sizeof(ports[0]) * new_ps);
> memset(ports_cfg + old_nb_ports, 0,
> sizeof(ports_cfg[0]) * new_ps);
> - for (i = old_links_map_end; i < links_map_end; i++)
> - links_map[i] =
> - EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> + for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
> + links_map = dev->data->links_map[i];
> + for (j = old_links_map_end; j < links_map_end; j++)
> + links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> + }
> }
> } else {
> if (*dev->dev_ops->port_release == NULL)
> @@ -953,21 +955,45 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> const uint8_t queues[], const uint8_t priorities[],
> uint16_t nb_links)
> {
> - struct rte_eventdev *dev;
> - uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
> + return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
> +}
> +
> +int
> +rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
> + const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id)
> +{
> uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
> + uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
> + struct rte_event_dev_info info;
> + struct rte_eventdev *dev;
> uint16_t *links_map;
> int i, diag;
>
> RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
> dev = &rte_eventdevs[dev_id];
>
> + if (*dev->dev_ops->dev_infos_get == NULL)
> + return -ENOTSUP;
> +
> + (*dev->dev_ops->dev_infos_get)(dev, &info);
> + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
> + profile_id >= info.max_profiles_per_port) {
> + RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
> + return -EINVAL;
> + }
> +
> if (*dev->dev_ops->port_link == NULL) {
> RTE_EDEV_LOG_ERR("Function not supported\n");
> rte_errno = ENOTSUP;
> return 0;
> }
>
> + if (profile_id && *dev->dev_ops->port_link_profile == NULL) {
> + RTE_EDEV_LOG_ERR("Function not supported\n");
> + rte_errno = ENOTSUP;
> + return 0;
> + }
> +
> if (!is_valid_port(dev, port_id)) {
> RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> rte_errno = EINVAL;
> @@ -995,18 +1021,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> return 0;
> }
>
> - diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
> - queues, priorities, nb_links);
> + if (profile_id)
> + diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
> + priorities, nb_links, profile_id);
> + else
> + diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
> + priorities, nb_links);
> if (diag < 0)
> return diag;
>
> - links_map = dev->data->links_map;
> + links_map = dev->data->links_map[profile_id];
> /* Point links_map to this port specific area */
> links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> for (i = 0; i < diag; i++)
> links_map[queues[i]] = (uint8_t)priorities[i];
>
> - rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
> + rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile_id, diag);
> return diag;
> }
>
> @@ -1014,27 +1044,51 @@ int
> rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> uint8_t queues[], uint16_t nb_unlinks)
> {
> - struct rte_eventdev *dev;
> + return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
> +}
> +
> +int
> +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint16_t nb_unlinks, uint8_t profile_id)
> +{
> uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> - int i, diag, j;
> + struct rte_event_dev_info info;
> + struct rte_eventdev *dev;
> uint16_t *links_map;
> + int i, diag, j;
>
> RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
> dev = &rte_eventdevs[dev_id];
>
> + if (*dev->dev_ops->dev_infos_get == NULL)
> + return -ENOTSUP;
> +
> + (*dev->dev_ops->dev_infos_get)(dev, &info);
> + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
> + profile_id >= info.max_profiles_per_port) {
> + RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
> + return -EINVAL;
> + }
> +
> if (*dev->dev_ops->port_unlink == NULL) {
> RTE_EDEV_LOG_ERR("Function not supported");
> rte_errno = ENOTSUP;
> return 0;
> }
>
> + if (profile_id && *dev->dev_ops->port_unlink_profile == NULL) {
> + RTE_EDEV_LOG_ERR("Function not supported");
> + rte_errno = ENOTSUP;
> + return 0;
> + }
> +
> if (!is_valid_port(dev, port_id)) {
> RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> rte_errno = EINVAL;
> return 0;
> }
>
> - links_map = dev->data->links_map;
> + links_map = dev->data->links_map[profile_id];
> /* Point links_map to this port specific area */
> links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>
> @@ -1063,16 +1117,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> return 0;
> }
>
> - diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
> - queues, nb_unlinks);
> -
> + if (profile_id)
> + diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
> + nb_unlinks, profile_id);
> + else
> + diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
> + nb_unlinks);
> if (diag < 0)
> return diag;
>
> for (i = 0; i < diag; i++)
> links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
>
> - rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
> + rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile_id, diag);
> return diag;
> }
>
> @@ -1116,7 +1173,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> return -EINVAL;
> }
>
> - links_map = dev->data->links_map;
> + /* Use the default profile_id. */
> + links_map = dev->data->links_map[0];
> /* Point links_map to this port specific area */
> links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> for (i = 0; i < dev->data->nb_queues; i++) {
> @@ -1132,6 +1190,49 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> return count;
> }
>
> +int
> +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint8_t priorities[], uint8_t profile_id)
> +{
> + struct rte_event_dev_info info;
> + struct rte_eventdev *dev;
> + uint16_t *links_map;
> + int i, count = 0;
> +
> + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +
> + dev = &rte_eventdevs[dev_id];
> + if (*dev->dev_ops->dev_infos_get == NULL)
> + return -ENOTSUP;
> +
> + (*dev->dev_ops->dev_infos_get)(dev, &info);
> + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
> + profile_id >= info.max_profiles_per_port) {
> + RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
> + return -EINVAL;
> + }
> +
> + if (!is_valid_port(dev, port_id)) {
> + RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> + return -EINVAL;
> + }
> +
> + links_map = dev->data->links_map[profile_id];
> + /* Point links_map to this port specific area */
> + links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> + for (i = 0; i < dev->data->nb_queues; i++) {
> + if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
> + queues[count] = i;
> + priorities[count] = (uint8_t)links_map[i];
> + ++count;
> + }
> + }
> +
> + rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile_id, count);
> +
> + return count;
> +}
> +
> int
> rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
> uint64_t *timeout_ticks)
> @@ -1440,7 +1541,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
> {
> char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
> const struct rte_memzone *mz;
> - int n;
> + int i, n;
>
> /* Generate memzone name */
> n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
> @@ -1460,11 +1561,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
> *data = mz->addr;
> if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> memset(*data, 0, sizeof(struct rte_eventdev_data));
> - for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
> - RTE_EVENT_MAX_QUEUES_PER_DEV;
> - n++)
> - (*data)->links_map[n] =
> - EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> + for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
> + for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
> + n++)
> + (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> }
>
> return 0;
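For context, the `links_map` change in this file replaces the single flat port-by-queue table with one such table per profile; the per-port slice is still located by pointer arithmetic. A scaled-down sketch of that indexing (constants shrunk from their `rte_config.h` values; `port_slice` is a hypothetical helper mirroring the arithmetic above):

```c
#include <stdint.h>

/* Scaled-down stand-ins for the rte_config.h constants. */
#define MAX_PROFILES 8
#define MAX_PORTS    4
#define MAX_QUEUES   4
#define PRIO_INVALID 0xdead /* stand-in for EVENT_QUEUE_SERVICE_PRIORITY_INVALID */

/* One flat ports-by-queues array per profile, as in rte_eventdev_data. */
static uint16_t links_map[MAX_PROFILES][MAX_PORTS * MAX_QUEUES];

/* Pick the profile's flat array, then offset to this port's slice,
 * mirroring: links_map = dev->data->links_map[profile_id];
 *            links_map += port_id * RTE_EVENT_MAX_QUEUES_PER_DEV;    */
static uint16_t *port_slice(uint8_t profile_id, uint8_t port_id)
{
	return links_map[profile_id] + (port_id * MAX_QUEUES);
}
```

Writing through one profile's slice leaves every other profile's links for the same port untouched, which is what makes per-profile link sets independent.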
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 2ba8a7b090..23cbff939f 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -320,6 +320,12 @@ struct rte_event;
> * rte_event_queue_setup().
> */
>
> +#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
> +/**< Event device is capable of supporting multiple link profiles per event port
> + * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
> + * than one.
> + */
> +
> /* Event device priority levels */
> #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
> /**< Highest priority expressed across eventdev subsystem
> @@ -446,6 +452,10 @@ struct rte_event_dev_info {
> * device. These ports and queues are not accounted for in
> * max_event_ports or max_event_queues.
> */
> + uint8_t max_profiles_per_port;
> + /**< Maximum number of event queue profiles per event port.
> + * A device that doesn't support multiple profiles will set this to 1.
> + */
> };
>
> /**
> @@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
> * latency of critical work by establishing the link with more event ports
> * at runtime.
> *
> + * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
> + * than or equal to one, this function links the event queues to the default
> + * profile_id i.e. profile_id 0 of the event port.
> + *
> * @param dev_id
> * The identifier of the device.
> *
> @@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> * Event queue(s) to event port unlink establishment can be changed at runtime
> * without re-configuring the device.
> *
> + * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
> + * than or equal to one, this function unlinks the event queues from the default
> + * profile identifier i.e. profile 0 of the event port.
> + *
> * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
> *
> * @param dev_id
> @@ -1626,6 +1644,136 @@ int
> rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> uint8_t queues[], uint16_t nb_unlinks);
>
> +/**
> + * Link multiple source event queues supplied in *queues* to the destination
> + * event port designated by its *port_id* with associated profile identifier
> + * supplied in *profile_id* with service priorities supplied in *priorities*
> + * on the event device designated by its *dev_id*.
> + *
> + * If *profile_id* is set to 0, the links created by the call `rte_event_port_link`
> + * will be overwritten.
> + *
> + * Event ports by default use profile_id 0 unless it is changed using the
> + * call ``rte_event_port_profile_switch()``.
> + *
> + * The link establishment shall enable the event port *port_id* from
> + * receiving events from the specified event queue(s) supplied in *queues*
> + *
> + * An event queue may link to one or more event ports.
> + * The number of links that can be established from an event queue to an
> + * event port is implementation defined.
> + *
> + * Event queue(s) to event port link establishment can be changed at runtime
> + * without re-configuring the device to support scaling and to reduce the
> + * latency of critical work by establishing the link with more event ports
> + * at runtime.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + *
> + * @param port_id
> + * Event port identifier to select the destination port to link.
> + *
> + * @param queues
> + * Points to an array of *nb_links* event queues to be linked
> + * to the event port.
> + * NULL value is allowed, in which case this function links all the configured
> + * event queues *nb_event_queues* which were previously supplied to
> + * rte_event_dev_configure() to the event port *port_id*
> + *
> + * @param priorities
> + * Points to an array of *nb_links* service priorities associated with each
> + * event queue link to event port.
> + * The priority defines the event port's servicing priority for
> + * event queue, which may be ignored by an implementation.
> + * The requested priority should be in the range of
> + * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
> + * The implementation shall normalize the requested priority to
> + * implementation supported priority value.
> + * NULL value is allowed, in which case this function links the event queues
> + * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
> + *
> + * @param nb_links
> + * The number of links to establish. This parameter is ignored if queues is
> + * NULL.
> + *
> + * @param profile_id
> + * The profile identifier associated with the links between event queues and
> + * event port. Should be less than the max capability reported by
> + * ``rte_event_dev_info::max_profiles_per_port``
> + *
> + * @return
> + * The number of links actually established. The return value can be less than
> + * the value of the *nb_links* parameter when the implementation has the
> + * limitation on specific queue to port link establishment or if invalid
> + * parameters are specified in *queues*
> + * If the return value is less than *nb_links*, the remaining links at the end
> + * of link[] are not established, and the caller has to take care of them.
> + * If the return value is less than *nb_links*, the implementation shall update
> + * rte_errno accordingly. Possible rte_errno values are:
> + * (EDQUOT) Quota exceeded (the application tried to link a queue configured with
> + * RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
> + * (EINVAL) Invalid parameter
> + *
> + */
> +__rte_experimental
> +int
> +rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
> + const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id);
> +
> +/**
> + * Unlink multiple source event queues supplied in *queues* that belong to profile
> + * designated by *profile_id* from the destination event port designated by its
> + * *port_id* on the event device designated by its *dev_id*.
> + *
> + * If *profile_id* is set to 0, i.e. the default profile, then this function
> + * acts as ``rte_event_port_unlink``.
> + *
> + * The unlink call issues an async request to disable the event port *port_id*
> + * from receiving events from the specified event queue(s) supplied in *queues*.
> + * Event queue(s) to event port unlink establishment can be changed at runtime
> + * without re-configuring the device.
> + *
> + * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + *
> + * @param port_id
> + * Event port identifier to select the destination port to unlink.
> + *
> + * @param queues
> + * Points to an array of *nb_unlinks* event queues to be unlinked
> + * from the event port.
> + * NULL value is allowed, in which case this function unlinks all the
> + * event queue(s) from the event port *port_id*.
> + *
> + * @param nb_unlinks
> + * The number of unlinks to establish. This parameter is ignored if queues is
> + * NULL.
> + *
> + * @param profile_id
> + * The profile identifier associated with the links between event queues and
> + * event port. Should be less than the max capability reported by
> + * ``rte_event_dev_info::max_profiles_per_port``
> + *
> + * @return
> + * The number of unlinks successfully requested. The return value can be less
> + * than the value of the *nb_unlinks* parameter when the implementation has the
> + * limitation on specific queue to port unlink establishment or
> + * if invalid parameters are specified.
> + * If the return value is less than *nb_unlinks*, the remaining queues at the
> + * end of queues[] are not unlinked, and the caller has to take care of them.
> + * If the return value is less than *nb_unlinks*, the implementation shall
> + * update rte_errno accordingly. Possible rte_errno values are:
> + * (EINVAL) Invalid parameter
> + *
> + */
> +__rte_experimental
> +int
> +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint16_t nb_unlinks, uint8_t profile_id);
> +
> /**
> * Returns the number of unlinks in progress.
> *
> @@ -1680,6 +1828,42 @@ int
> rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> uint8_t queues[], uint8_t priorities[]);
>
> +/**
> + * Retrieve the list of source event queues and their service priorities
> + * associated with a *profile_id* and linked to the destination event port
> + * designated by its *port_id* on the event device designated by its *dev_id*.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + *
> + * @param port_id
> + * Event port identifier.
> + *
> + * @param[out] queues
> + * Points to an array of *queues* for output.
> + * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
> + * store the event queue(s) linked with event port *port_id*
> + *
> + * @param[out] priorities
> + * Points to an array of *priorities* for output.
> + * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
> + * store the service priority associated with each event queue linked
> + *
> + * @param profile_id
> + * The profile identifier associated with the links between event queues and
> + * event port. Should be less than the max capability reported by
> + * ``rte_event_dev_info::max_profiles_per_port``
> + *
> + * @return
> + * The number of links established on the event port designated by its
> + * *port_id*.
> + * - <0 on failure.
> + */
> +__rte_experimental
> +int
> +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
> + uint8_t priorities[], uint8_t profile_id);
> +
> /**
> * Retrieve the service ID of the event dev. If the adapter doesn't use
> * a rte_service function, this function returns -ESRCH.
> @@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
> return 0;
> }
>
> +/**
> + * Change the active profile on an event port.
> + *
> + * This function is used to change the current active profile on an event port
> + * when multiple link profiles are configured on an event port through the
> + * function call ``rte_event_port_profile_links_set``.
> + *
> + * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
> + * that were associated with the newly active profile will participate in
> + * scheduling.
> + *
> + * @param dev_id
> + * The identifier of the device.
> + * @param port_id
> + * The identifier of the event port.
> + * @param profile_id
> + * The identifier of the profile.
> + * @return
> + * - 0 on success.
> + * - -EINVAL if *dev_id*, *port_id*, or *profile_id* is invalid.
> + */
> +__rte_experimental
> +static inline int
> +rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile_id)
> +{
> + const struct rte_event_fp_ops *fp_ops;
> + void *port;
> +
> + fp_ops = &rte_event_fp_ops[dev_id];
> + port = fp_ops->data[port_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> + if (dev_id >= RTE_EVENT_MAX_DEVS ||
> + port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
> + return -EINVAL;
> +
> + if (port == NULL)
> + return -EINVAL;
> +
> + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT)
> + return -EINVAL;
> +#endif
> + rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile_id);
> +
> + return fp_ops->profile_switch(port, profile_id);
> +}
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
> index c27a52ccc0..5af646ed5c 100644
> --- a/lib/eventdev/rte_eventdev_core.h
> +++ b/lib/eventdev/rte_eventdev_core.h
> @@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
> uint16_t nb_events);
> /**< @internal Enqueue burst of events on crypto adapter */
>
> +typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
> +
> struct rte_event_fp_ops {
> void **data;
> /**< points to array of internal port data pointers */
> @@ -65,7 +67,9 @@ struct rte_event_fp_ops {
> /**< PMD Tx adapter enqueue same destination function. */
> event_crypto_adapter_enqueue_t ca_enqueue;
> /**< PMD Crypto adapter enqueue function. */
> - uintptr_t reserved[5];
> + event_profile_switch_t profile_switch;
> + /**< PMD Event switch profile function. */
> + uintptr_t reserved[4];
> } __rte_cache_aligned;
>
> extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
> index af2172d2a5..04d510ad00 100644
> --- a/lib/eventdev/rte_eventdev_trace_fp.h
> +++ b/lib/eventdev/rte_eventdev_trace_fp.h
> @@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
> rte_trace_point_emit_int(op);
> )
>
> +RTE_TRACE_POINT_FP(
> + rte_eventdev_trace_port_profile_switch,
> + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
> + rte_trace_point_emit_u8(dev_id);
> + rte_trace_point_emit_u8(port_id);
> + rte_trace_point_emit_u8(profile);
> +)
> +
> RTE_TRACE_POINT_FP(
> rte_eventdev_trace_eth_tx_adapter_enqueue,
> RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 7ce09a87bb..f88decee39 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -134,6 +134,10 @@ EXPERIMENTAL {
>
> # added in 23.11
> rte_event_eth_rx_adapter_create_ext_with_params;
> + rte_event_port_profile_links_set;
> + rte_event_port_profile_unlink;
> + rte_event_port_profile_links_get;
> + __rte_eventdev_trace_port_profile_switch;
> };
>
> INTERNAL {
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v5 0/3] Introduce event link profiles
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
` (3 preceding siblings ...)
2023-09-28 14:45 ` [PATCH v4 0/3] Introduce event link profiles Jerin Jacob
@ 2023-10-03 7:51 ` pbhagavatula
2023-10-03 7:51 ` [PATCH v5 1/3] eventdev: introduce " pbhagavatula
` (3 more replies)
4 siblings, 4 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 7:51 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a link profile; multiple such profiles
can be configured based on the event device capability using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This affects the next
`rte_event_dequeue_burst` call, where the event queues associated with the
newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:
Config path:
uint8_t lq[4] = {4, 5, 6, 7};
uint8_t hq[4] = {0, 1, 2, 3};
if (rte_event_dev_info.max_profiles_per_port < 2)
return -ENOTSUP;
rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
Worker path:
empty_high_deq = 0;
empty_low_deq = 0;
is_low_deq = 0;
while (1) {
deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
if (deq == 0) {
/**
* Change link profile based on work activity on current
* active profile
*/
if (is_low_deq) {
empty_low_deq++;
if (empty_low_deq == MAX_LOW_RETRY) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
empty_low_deq = 0;
}
continue;
}
if (++empty_high_deq == MAX_HIGH_RETRY) {
rte_event_port_profile_switch(0, 0, 1);
is_low_deq = 1;
empty_high_deq = 0;
}
continue;
}
// Process the event received.
if (is_low_deq++ == MAX_LOW_EVENTS) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
}
}
An application could use heuristic data of load/activity of a given event
port and change its active profile to adapt to the traffic pattern.
An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
v5 Changes:
----------
- Rebase on next-event
v4 Changes:
----------
- Address review comments (Jerin).
v3 Changes:
----------
- Rebase to next-eventdev
- Rename testcase name to match API.
v2 Changes:
----------
- Fix compilation.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 13 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++--
drivers/event/cnxk/cn10k_worker.c | 11 ++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 +++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 5 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
28 files changed, 813 insertions(+), 109 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v5 1/3] eventdev: introduce link profiles
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
@ 2023-10-03 7:51 ` pbhagavatula
2023-10-03 7:51 ` [PATCH v5 2/3] event/cnxk: implement event " pbhagavatula
` (2 subsequent siblings)
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 7:51 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a link profile; multiple
such profiles can be created based on the event device capability
using the function `rte_event_port_profile_links_set`, which takes
arguments similar to `rte_event_port_link` in addition to the profile
identifier.
The maximum number of link profiles supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This affects
the next `rte_event_dequeue_burst` call, where the event queues
associated with the newly active link profile will participate in
scheduling.
An unlink function `rte_event_port_profile_unlink` is provided
to modify the links associated to a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the
links associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 11 ++
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 5 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
13 files changed, 535 insertions(+), 28 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 401727703f..a06189d0b5 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 73a52d915b..e980ae134a 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
carry_flow_id =
maintenance_free =
runtime_queue_attr =
+profile_links =
;
; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index ff55115d0d..8c15c678bf 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+Linking Queues to Ports with link profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at runtime depending upon heuristic data.
+Using link profiles can reduce the overhead of linking/unlinking and waiting for unlinks in
+progress in the fast path, and gives applications the ability to switch between preset profiles on the fly.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lq[4] = {4, 5, 6, 7};
+ uint8_t hq[4] = {0, 1, 2, 3};
+
+ if (rte_event_dev_info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+ rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t profile_id_to_switch;
+
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ profile_id_to_switch = app_find_profile_id_to_switch();
+ rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+ continue;
+ }
+
+ // Process the event received.
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index b66c364e21..fe6656bed2 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -90,6 +90,17 @@ New Features
model by introducing APIs that allow applications to enqueue/dequeue DMA
operations to/from dmadev as events scheduled by an event device.
+* **Added eventdev support to link queues to port with link profile.**
+
+  Introduced event link profiles that can be used to associate links between
+  event queues and an event port with a unique identifier termed a link profile.
+  The profile can be used to switch between the associated links in the fast path
+  without the additional overhead of linking/unlinking and waiting for unlinks.
+
+  * Added ``rte_event_port_profile_links_set``, ``rte_event_port_profile_unlink``,
+    ``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch``
+ APIs to enable this feature.
+
* **Updated Marvell cnxk eventdev driver.**
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f7227c0bfd..30bd90085c 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
@@ -179,9 +179,10 @@ struct rte_eventdev {
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< Pointer to PMD crypto adapter enqueue function. */
-
event_dma_adapter_enqueue_t dma_enqueue;
/**< Pointer to PMD DMA adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< Pointer to PMD Event switch profile function. */
uint64_t reserved_64s[3]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -441,6 +442,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a link profile to a
+ * destination event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue linked to the event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile_id
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile_id);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -459,6 +486,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a link profile from
+ * destination event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile_id
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile_id);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1502,8 +1551,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 18ed8bf3c8..017f97ccab 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -89,6 +89,13 @@ dummy_event_dma_adapter_enqueue(__rte_unused void *port, __rte_unused struct rte
return 0;
}
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile_id)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -106,6 +113,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
.dma_enqueue = dummy_event_dma_adapter_enqueue,
+ .profile_switch = dummy_event_port_profile_switch,
.data = dummy_data,
};
@@ -127,5 +135,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
fp_op->dma_enqueue = dev->dma_enqueue;
+ fp_op->profile_switch = dev->profile_switch;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..9c2b261c06 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_set,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile_id, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_unlink,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile_id, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
@@ -487,6 +509,16 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(count);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_get,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile_id,
+ int count),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(count);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlinks_in_progress,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..8024e07531 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
+ lib.eventdev.port.profile.links.set)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
+ lib.eventdev.port.profile.unlink)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
@@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
lib.eventdev.maintain)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
+ lib.eventdev.port.profile.switch)
+
/* Eventdev Rx adapter trace points */
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create,
lib.eventdev.rx.adapter.create)
@@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get,
lib.eventdev.port.links.get)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get,
+ lib.eventdev.port.profile.links.get)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress,
lib.eventdev.port.unlinks.in.progress)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 60509c6efb..5ee8bd665b 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
return -EINVAL;
memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+ dev_info->max_profiles_per_port = 1;
if (*dev->dev_ops->dev_infos_get == NULL)
return -ENOTSUP;
@@ -293,7 +294,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -304,7 +305,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -320,9 +320,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -976,21 +978,45 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile_id && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -1018,18 +1044,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile_id)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile_id);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile_id];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile_id, diag);
return diag;
}
@@ -1037,27 +1067,51 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile_id)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile_id && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile_id];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1086,16 +1140,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile_id)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile_id);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile_id, diag);
return diag;
}
@@ -1139,7 +1196,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile_id. */
+ links_map = dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1155,6 +1213,49 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return count;
}
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile_id)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile_id];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile_id, count);
+
+ return count;
+}
+
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
uint64_t *timeout_ticks)
@@ -1463,7 +1564,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1483,11 +1584,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
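The hunk above changes `links_map` from a flat array into one plane per profile, initialized to the invalid priority and indexed per port. A minimal self-contained sketch of that layout (constants and the `PRIO_INVALID` sentinel are illustrative stand-ins, not the real `rte_config.h` values):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative constants; the real values come from rte_config.h. */
#define MAX_PROFILES 8
#define MAX_PORTS    4
#define MAX_QUEUES   16
#define PRIO_INVALID 0xdead /* stands in for EVENT_QUEUE_SERVICE_PRIORITY_INVALID */

static uint16_t links_map[MAX_PROFILES][MAX_PORTS * MAX_QUEUES];

/* Mirror of the init loop in eventdev_data_alloc(): every slot in
 * every profile plane starts out as "not linked". */
static void links_map_init(void)
{
    for (int i = 0; i < MAX_PROFILES; i++)
        for (int n = 0; n < MAX_PORTS * MAX_QUEUES; n++)
            links_map[i][n] = PRIO_INVALID;
}

/* Mirror of the pointer arithmetic in the link/unlink/get paths:
 * pick the profile plane, then the per-port slice inside it. */
static uint16_t *port_links(uint8_t profile_id, uint8_t port_id)
{
    return links_map[profile_id] + (uint32_t)port_id * MAX_QUEUES;
}
```

Linking a queue on one profile leaves the same port's slots in every other profile untouched, which is what lets a port carry independent link sets.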
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 41743f91b1..2ea98302b8 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
* rte_event_queue_setup().
*/
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/**< Event device is capable of supporting multiple link profiles per event port,
+ * i.e., the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than one.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1580,6 +1590,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile_id i.e. profile_id 0 of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1637,6 +1651,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile identifier i.e. profile 0 of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1670,6 +1688,136 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile_id* with service priorities supplied in *priorities*
+ * on the event device designated by its *dev_id*.
+ *
+ * If *profile_id* is set to 0, the links created by the call
+ * ``rte_event_port_link`` will be overwritten.
+ *
+ * Event ports by default use profile_id 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s) supplied in *queues*.
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues* which were previously supplied to
+ * rte_event_dev_configure() to the event port *port_id*.
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * The priority defines the event port's servicing priority for
+ * event queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to
+ * implementation supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has the
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded(Application tried to link the queue configured with
+ * RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile_id* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile_id* is set to 0, i.e., the default profile, then this function
+ * behaves the same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue *queue_id*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has the
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile_id);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1724,6 +1872,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a *profile_id* and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile_id);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2309,6 +2493,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile_id
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile_id* is invalid.
+ */
+__rte_experimental
+static inline int
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile_id)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile_id);
+
+ return fp_ops->profile_switch(port, profile_id);
+}
+
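`rte_event_port_profile_switch()` above is a fast-path helper: it resolves the PMD callback through `rte_event_fp_ops` and makes one indirect call, with no locking. A minimal mock of that dispatch shape (all `mock_*` names are hypothetical, not DPDK APIs):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical stand-ins for a PMD port and its fp-ops slot. */
struct mock_port { uint8_t active_profile; };

struct mock_fp_ops {
    /* same shape as event_profile_switch_t in rte_eventdev_core.h */
    int (*profile_switch)(void *port, uint8_t profile);
};

static int mock_profile_switch(void *port, uint8_t profile)
{
    if (profile >= 2) /* pretend the device supports two profiles */
        return -EINVAL;
    ((struct mock_port *)port)->active_profile = profile;
    return 0;
}

/* Mirrors the API shape: one indirect call; the new profile takes
 * effect on the next dequeue, not retroactively. */
static int do_switch(struct mock_fp_ops *ops, void *port, uint8_t profile)
{
    return ops->profile_switch(port, profile);
}
```

The real helper additionally does the debug-build bounds checks shown in the diff before calling through the ops table.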
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 83e8736c71..5b405518d1 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -46,6 +46,9 @@ typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[
uint16_t nb_events);
/**< @internal Enqueue burst of events on DMA adapter */
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+/**< @internal Switch active link profile on the event port. */
+
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -71,6 +74,8 @@ struct rte_event_fp_ops {
/**< PMD Crypto adapter enqueue function. */
event_dma_adapter_enqueue_t dma_enqueue;
/**< PMD DMA adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD Event switch profile function. */
uintptr_t reserved[4];
} __rte_cache_aligned;
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..04d510ad00 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_port_profile_switch,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b81eb2919c..59ee8b86cf 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -150,6 +150,10 @@ EXPERIMENTAL {
rte_event_dma_adapter_vchan_add;
rte_event_dma_adapter_vchan_del;
rte_event_eth_rx_adapter_create_ext_with_params;
+ rte_event_port_profile_links_set;
+ rte_event_port_profile_unlink;
+ rte_event_port_profile_links_get;
+ __rte_eventdev_trace_port_profile_switch;
};
INTERNAL {
--
2.25.1

* [PATCH v5 2/3] event/cnxk: implement event link profiles
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
2023-10-03 7:51 ` [PATCH v5 1/3] eventdev: introduce " pbhagavatula
@ 2023-10-03 7:51 ` pbhagavatula
2023-10-03 7:51 ` [PATCH v5 3/3] test/event: add event link profile test pbhagavatula
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 7:51 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profiles support on CN10K and CN9K.
Both the platforms support up to 2 link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/rel_notes/release_23_11.rst | 2 +
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
15 files changed, 161 insertions(+), 81 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index bee69bf8f4..5d353e3670 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,7 +12,8 @@ runtime_port_link = Y
multiple_queue_port = Y
carry_flow_id = Y
maintenance_free = Y
-runtime_queue_attr = y
+runtime_queue_attr = Y
+profile_links = Y
[Eth Rx adapter Features]
internal_port = Y
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index fe6656bed2..66c4ddf37c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -105,6 +105,8 @@ New Features
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
to get the remaining ticks to expire for a given event timer.
+ * Added link profiles support for Marvell CNXK event device driver;
+   up to two link profiles are supported per event port.
Removed Items
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index c37da685da..748d287bad 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -186,8 +186,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -204,7 +204,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
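The hunk above extends the per-hwgrp mask word written by `sso_hws_link_modify()`: the hwgrp id sits in the low bits, the new link-set (profile) selector is OR'd in at bit 12, and the enable flag stays at bit 14. A sketch of just that encoding, with bit positions taken from the diff:

```c
#include <assert.h>
#include <stdint.h>

/* Build one link-modify mask word as in sso_hws_link_modify():
 * low bits carry the hwgrp id, bit 12 selects the link set
 * (profile), bit 14 is the enable flag. */
static uint32_t sso_link_mask(uint16_t hwgrp, uint8_t set, uint16_t enable)
{
    return (uint32_t)hwgrp | (uint32_t)set << 12 | (uint32_t)enable << 14;
}
```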
@@ -290,8 +290,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -299,14 +299,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -314,7 +314,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index 8ee62afb9a..64f14b8119 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index cf186b9af4..bb0c910553 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -482,6 +483,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->profile_switch = cn10k_sso_hws_profile_switch;
#else
RTE_SET_USED(event_dev);
#endif
@@ -633,9 +635,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -644,14 +645,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -659,11 +660,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -1020,6 +1035,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
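`cn10k_sso_hws_profile_switch()` above only rewrites the low byte of the cached getwork doorbell word; per the diff, the value programmed is `profile + 1`, and the rest of `gw_wdata` is preserved. A self-contained sketch of that field update (the hardware meaning of the field is as described in the diff, not restated here):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the gw_wdata update in cn10k_sso_hws_profile_switch():
 * clear the low byte, then encode the new profile as profile + 1. */
static uint64_t gw_wdata_set_profile(uint64_t gw_wdata, uint8_t profile)
{
    gw_wdata &= ~(uint64_t)0xFF;          /* clear current field */
    gw_wdata |= (uint64_t)(profile + 1);  /* program new profile */
    return gw_wdata;
}
```

Because the switch is just this word update, it costs nothing on the fast path; the newly selected link set applies from the next getwork issued by the port.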
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e71ab3c523..26fecf21fb 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -329,6 +329,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index fe6f5d9f86..9fb9ca0d63 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->profile_switch = cn9k_sso_hws_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+ event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -708,9 +709,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -719,14 +719,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -734,11 +734,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1019,6 +1033,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ee659e80d6..6936b7ad04 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -366,6 +366,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -382,6 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 9c9192bd40..0c61f4c20e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,7 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
- RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+ RTE_EVENT_DEV_CAP_PROFILE_LINK;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -128,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map;
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -435,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -446,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bd50de87c0..d42d1afa1a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v5 3/3] test/event: add event link profile test
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
2023-10-03 7:51 ` [PATCH v5 1/3] eventdev: introduce " pbhagavatula
2023-10-03 7:51 ` [PATCH v5 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-10-03 7:51 ` pbhagavatula
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 7:51 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index c51c93bdbd..0ecfa7db02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_profile_switch(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+ TEST_ASSERT(rc == 1, "Failed to get links for profile 0");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+ TEST_ASSERT(rc == 1, "Failed to get links for profile 1");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ q = 0;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 0 from port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 1 from port 0 with profile 1");
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_profile_switch),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
* [PATCH v6 0/3] Introduce event link profiles
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
` (2 preceding siblings ...)
2023-10-03 7:51 ` [PATCH v5 3/3] test/event: add event link profile test pbhagavatula
@ 2023-10-03 9:47 ` pbhagavatula
2023-10-03 9:47 ` [PATCH v6 1/3] eventdev: introduce " pbhagavatula
` (3 more replies)
3 siblings, 4 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 9:47 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a link profile. Multiple such profiles
can be configured based on the event device capability using the function
`rte_event_port_profile_links_set` which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This affects the next
`rte_event_dequeue_burst` call, where the event queues associated with the
newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:
Config path:
uint8_t lq[4] = {4, 5, 6, 7};
uint8_t hq[4] = {0, 1, 2, 3};
if (rte_event_dev_info.max_profiles_per_port < 2)
return -ENOTSUP;
rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
Worker path:
empty_high_deq = 0;
empty_low_deq = 0;
is_low_deq = 0;
while (1) {
deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
if (deq == 0) {
/**
* Change link profile based on work activity on current
* active profile
*/
if (is_low_deq) {
empty_low_deq++;
if (empty_low_deq == MAX_LOW_RETRY) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
empty_low_deq = 0;
}
continue;
}
if (empty_high_deq == MAX_HIGH_RETRY) {
rte_event_port_profile_switch(0, 0, 1);
is_low_deq = 1;
empty_high_deq = 0;
}
continue;
}
// Process the event received.
if (is_low_deq++ == MAX_LOW_EVENTS) {
rte_event_port_profile_switch(0, 0, 0);
is_low_deq = 0;
}
}
An application could use heuristic data on the load/activity of a given event
port and change its active profile to adapt to the traffic pattern.
An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
v6 Changes:
----------
- Fix compilation.
v5 Changes:
----------
- Rebase on next-event
v4 Changes:
----------
- Address review comments (Jerin).
v3 Changes:
----------
- Rebase to next-eventdev
- Rename testcase name to match API.
v2 Changes:
----------
- Fix compilation.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 13 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++--
drivers/event/cnxk/cn10k_worker.c | 11 ++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 +++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 5 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
28 files changed, 813 insertions(+), 109 deletions(-)
--
2.25.1
* [PATCH v6 1/3] eventdev: introduce link profiles
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
@ 2023-10-03 9:47 ` pbhagavatula
2023-10-03 9:47 ` [PATCH v6 2/3] event/cnxk: implement event " pbhagavatula
` (2 subsequent siblings)
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 9:47 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be
associated with a unique identifier called a link profile. Multiple
such profiles can be created based on the event device capability
using the function `rte_event_port_profile_links_set` which takes
arguments similar to `rte_event_port_link` in addition to the profile
identifier.
The maximum number of link profiles supported by an event device
is advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use the link profile 0
on initialization.
Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This affects
the next `rte_event_dequeue_burst` call, where the event queues
associated with the newly active link profile will participate in
scheduling.
An unlink function `rte_event_port_profile_unlink` is provided
to modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the
links associated with a profile.
Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
config/rte_config.h | 1 +
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 11 ++
drivers/event/cnxk/cnxk_eventdev.c | 2 +-
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 5 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
14 files changed, 536 insertions(+), 29 deletions(-)
diff --git a/config/rte_config.h b/config/rte_config.h
index 401727703f..a06189d0b5 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
#define RTE_EVENT_MAX_DEVS 16
#define RTE_EVENT_MAX_PORTS_PER_DEV 255
#define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
#define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
#define RTE_EVENT_ETH_INTR_RING_SIZE 1024
#define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 73a52d915b..e980ae134a 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
carry_flow_id =
maintenance_free =
runtime_queue_attr =
+profile_links =
;
; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index ff55115d0d..8c15c678bf 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
}
int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+Linking Queues to Ports with link profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at runtime depending upon heuristic data.
+Using link profiles can reduce the overhead of linking/unlinking and waiting for in-progress
+unlinks in the fast path, and gives applications the ability to switch between preset profiles on the fly.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+ uint8_t lq[4] = {4, 5, 6, 7};
+ uint8_t hq[4] = {0, 1, 2, 3};
+
+ if (rte_event_dev_info.max_profiles_per_port < 2)
+ return -ENOTSUP;
+
+ rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+ rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+ uint8_t profile_id_to_switch;
+
+ while (1) {
+ deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+ if (deq == 0) {
+ profile_id_to_switch = app_find_profile_id_to_switch();
+ rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+ continue;
+ }
+
+ // Process the event received.
+ }
+
Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index b66c364e21..fe6656bed2 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -90,6 +90,17 @@ New Features
model by introducing APIs that allow applications to enqueue/dequeue DMA
operations to/from dmadev as events scheduled by an event device.
+* **Added eventdev support to link queues to ports with link profiles.**
+
+ Introduced event link profiles that can be used to associate links between
+ event queues and an event port with a unique identifier termed a link profile.
+ The profile can be used to switch between the associated links in fast-path
+ without the additional overhead of linking/unlinking and waiting for unlinking.
+
+ * Added ``rte_event_port_profile_links_set``, ``rte_event_port_profile_unlink``,
+ ``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch``
+ APIs to enable this feature.
+
* **Updated Marvell cnxk eventdev driver.**
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 9c9192bd40..e8ea7e0efb 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -133,7 +133,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
for (i = 0; i < dev->nb_event_ports; i++) {
uint16_t nb_hwgrp = 0;
- links_map = event_dev->data->links_map;
+ links_map = event_dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f7227c0bfd..30bd90085c 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
/**< Array of port configuration structures. */
struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Array of queue configuration structures. */
- uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+ [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
/**< Memory to store queues to port connections. */
void *dev_private;
/**< PMD-specific private data */
@@ -179,9 +179,10 @@ struct rte_eventdev {
/**< Pointer to PMD eth Tx adapter enqueue function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< Pointer to PMD crypto adapter enqueue function. */
-
event_dma_adapter_enqueue_t dma_enqueue;
/**< Pointer to PMD DMA adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< Pointer to PMD Event switch profile function. */
uint64_t reserved_64s[3]; /**< Reserved for future fields */
void *reserved_ptrs[3]; /**< Reserved for future fields */
@@ -441,6 +442,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links);
+/**
+ * Link multiple source event queues associated with a link profile to a
+ * destination event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * @param nb_links
+ * The number of links to establish.
+ * @param profile_id
+ * The profile ID to associate the links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links, uint8_t profile_id);
+
/**
* Unlink multiple source event queues from destination event port.
*
@@ -459,6 +486,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Unlink multiple source event queues associated with a link profile from
+ * destination event port.
+ *
+ * @param dev
+ * Event device pointer
+ * @param port
+ * Event port pointer
+ * @param queues
+ * An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ * The number of unlinks to establish
+ * @param profile_id
+ * The profile ID of the associated links.
+ *
+ * @return
+ * Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_unlinks,
+ uint8_t profile_id);
+
/**
* Unlinks in progress. Returns number of unlinks that the PMD is currently
* performing, but have not yet been completed.
@@ -1502,8 +1551,12 @@ struct eventdev_ops {
eventdev_port_link_t port_link;
/**< Link event queues to an event port. */
+ eventdev_port_link_profile_t port_link_profile;
+ /**< Link event queues associated with a profile to an event port. */
eventdev_port_unlink_t port_unlink;
/**< Unlink event queues from an event port. */
+ eventdev_port_unlink_profile_t port_unlink_profile;
+ /**< Unlink event queues associated with a profile from an event port. */
eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
/**< Unlinks in progress on an event port. */
eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 18ed8bf3c8..017f97ccab 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -89,6 +89,13 @@ dummy_event_dma_adapter_enqueue(__rte_unused void *port, __rte_unused struct rte
return 0;
}
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile_id)
+{
+ RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+ return -EINVAL;
+}
+
void
event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
@@ -106,6 +113,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
dummy_event_tx_adapter_enqueue_same_dest,
.ca_enqueue = dummy_event_crypto_adapter_enqueue,
.dma_enqueue = dummy_event_dma_adapter_enqueue,
+ .profile_switch = dummy_event_port_profile_switch,
.data = dummy_data,
};
@@ -127,5 +135,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
fp_op->ca_enqueue = dev->ca_enqueue;
fp_op->dma_enqueue = dev->dma_enqueue;
+ fp_op->profile_switch = dev->profile_switch;
fp_op->data = dev->data->ports;
}
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..9c2b261c06 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_set,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_links, uint8_t profile_id, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_links);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlink,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(rc);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_unlink,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+ uint16_t nb_unlinks, uint8_t profile_id, int rc),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u16(nb_unlinks);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(rc);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_start,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
@@ -487,6 +509,16 @@ RTE_TRACE_POINT(
rte_trace_point_emit_int(count);
)
+RTE_TRACE_POINT(
+ rte_eventdev_trace_port_profile_links_get,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile_id,
+ int count),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile_id);
+ rte_trace_point_emit_int(count);
+)
+
RTE_TRACE_POINT(
rte_eventdev_trace_port_unlinks_in_progress,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..8024e07531 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
lib.eventdev.port.link)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set,
+ lib.eventdev.port.profile.links.set)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
lib.eventdev.port.unlink)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink,
+ lib.eventdev.port.profile.unlink)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
lib.eventdev.start)
@@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
lib.eventdev.maintain)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
+ lib.eventdev.port.profile.switch)
+
/* Eventdev Rx adapter trace points */
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create,
lib.eventdev.rx.adapter.create)
@@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get,
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get,
lib.eventdev.port.links.get)
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get,
+ lib.eventdev.port.profile.links.get)
+
RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress,
lib.eventdev.port.unlinks.in.progress)
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 60509c6efb..5ee8bd665b 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
return -EINVAL;
memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+ dev_info->max_profiles_per_port = 1;
if (*dev->dev_ops->dev_infos_get == NULL)
return -ENOTSUP;
@@ -293,7 +294,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
void **ports;
uint16_t *links_map;
struct rte_event_port_conf *ports_cfg;
- unsigned int i;
+ unsigned int i, j;
RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
dev->data->dev_id);
@@ -304,7 +305,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
ports = dev->data->ports;
ports_cfg = dev->data->ports_cfg;
- links_map = dev->data->links_map;
for (i = nb_ports; i < old_nb_ports; i++)
(*dev->dev_ops->port_release)(ports[i]);
@@ -320,9 +320,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
sizeof(ports[0]) * new_ps);
memset(ports_cfg + old_nb_ports, 0,
sizeof(ports_cfg[0]) * new_ps);
- for (i = old_links_map_end; i < links_map_end; i++)
- links_map[i] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+ links_map = dev->data->links_map[i];
+ for (j = old_links_map_end; j < links_map_end; j++)
+ links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ }
}
} else {
if (*dev->dev_ops->port_release == NULL)
@@ -976,21 +978,45 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
const uint8_t queues[], const uint8_t priorities[],
uint16_t nb_links)
{
- struct rte_eventdev *dev;
- uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id)
+{
uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
int i, diag;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_link == NULL) {
RTE_EDEV_LOG_ERR("Function not supported\n");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile_id && *dev->dev_ops->port_link_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported\n");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
@@ -1018,18 +1044,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
- queues, priorities, nb_links);
+ if (profile_id)
+ diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links, profile_id);
+ else
+ diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+ priorities, nb_links);
if (diag < 0)
return diag;
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile_id];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < diag; i++)
links_map[queues[i]] = (uint8_t)priorities[i];
- rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+ rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile_id, diag);
return diag;
}
@@ -1037,27 +1067,51 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks)
{
- struct rte_eventdev *dev;
+ return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile_id)
+{
uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- int i, diag, j;
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
uint16_t *links_map;
+ int i, diag, j;
RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->port_unlink == NULL) {
RTE_EDEV_LOG_ERR("Function not supported");
rte_errno = ENOTSUP;
return 0;
}
+ if (profile_id && *dev->dev_ops->port_unlink_profile == NULL) {
+ RTE_EDEV_LOG_ERR("Function not supported");
+ rte_errno = ENOTSUP;
+ return 0;
+ }
+
if (!is_valid_port(dev, port_id)) {
RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
rte_errno = EINVAL;
return 0;
}
- links_map = dev->data->links_map;
+ links_map = dev->data->links_map[profile_id];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
@@ -1086,16 +1140,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
return 0;
}
- diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
- queues, nb_unlinks);
-
+ if (profile_id)
+ diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks, profile_id);
+ else
+ diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+ nb_unlinks);
if (diag < 0)
return diag;
for (i = 0; i < diag; i++)
links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
- rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+ rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile_id, diag);
return diag;
}
@@ -1139,7 +1196,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return -EINVAL;
}
- links_map = dev->data->links_map;
+ /* Use the default profile_id. */
+ links_map = dev->data->links_map[0];
/* Point links_map to this port specific area */
links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1155,6 +1213,49 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
return count;
}
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile_id)
+{
+ struct rte_event_dev_info info;
+ struct rte_eventdev *dev;
+ uint16_t *links_map;
+ int i, count = 0;
+
+ RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+ dev = &rte_eventdevs[dev_id];
+ if (*dev->dev_ops->dev_infos_get == NULL)
+ return -ENOTSUP;
+
+ (*dev->dev_ops->dev_infos_get)(dev, &info);
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT ||
+ profile_id >= info.max_profiles_per_port) {
+ RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id);
+ return -EINVAL;
+ }
+
+ if (!is_valid_port(dev, port_id)) {
+ RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+ return -EINVAL;
+ }
+
+ links_map = dev->data->links_map[profile_id];
+ /* Point links_map to this port specific area */
+ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (i = 0; i < dev->data->nb_queues; i++) {
+ if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+ queues[count] = i;
+ priorities[count] = (uint8_t)links_map[i];
+ ++count;
+ }
+ }
+
+ rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile_id, count);
+
+ return count;
+}
+
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
uint64_t *timeout_ticks)
@@ -1463,7 +1564,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
{
char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
const struct rte_memzone *mz;
- int n;
+ int i, n;
/* Generate memzone name */
n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1483,11 +1584,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
*data = mz->addr;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
memset(*data, 0, sizeof(struct rte_eventdev_data));
- for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
- RTE_EVENT_MAX_QUEUES_PER_DEV;
- n++)
- (*data)->links_map[n] =
- EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+ for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+ for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+ n++)
+ (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
}
return 0;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 41743f91b1..2ea98302b8 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
* rte_event_queue_setup().
*/
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/**< Event device is capable of supporting multiple link profiles per event port
+ * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one.
+ */
+
/* Event device priority levels */
#define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
/**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
+ uint8_t max_profiles_per_port;
+ /**< Maximum number of event queue profiles per event port.
+ * A device that doesn't support multiple profiles will set this to 1.
+ */
};
/**
@@ -1580,6 +1590,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* latency of critical work by establishing the link with more event ports
* at runtime.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than one, this function links the event queues to the default profile,
+ * i.e. profile_id 0, of the event port.
+ *
* @param dev_id
* The identifier of the device.
*
@@ -1637,6 +1651,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
* Event queue(s) to event port unlink establishment can be changed at runtime
* without re-configuring the device.
*
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than one, this function unlinks the event queues from the default profile,
+ * i.e. profile_id 0, of the event port.
+ *
* @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
*
* @param dev_id
@@ -1670,6 +1688,136 @@ int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint16_t nb_unlinks);
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile_id* with service priorities supplied in *priorities*
+ * on the event device designated by its *dev_id*.
+ *
+ * If *profile_id* is set to 0, the links created by the call ``rte_event_port_link``
+ * will be overwritten.
+ *
+ * Event ports by default use profile_id 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s) supplied in *queues*.
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ * Points to an array of *nb_links* event queues to be linked
+ * to the event port.
+ * NULL value is allowed, in which case this function links all the configured
+ * event queues *nb_event_queues* which were previously supplied to
+ * rte_event_dev_configure() to the event port *port_id*.
+ *
+ * @param priorities
+ * Points to an array of *nb_links* service priorities associated with each
+ * event queue link to event port.
+ * The priority defines the event port's servicing priority for the event
+ * queue, which may be ignored by an implementation.
+ * The requested priority should be in the range of
+ * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ * The implementation shall normalize the requested priority to an
+ * implementation supported priority value.
+ * NULL value is allowed, in which case this function links the event queues
+ * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority.
+ *
+ * @param nb_links
+ * The number of links to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (EDQUOT) Quota exceeded (the application tried to link a queue configured
+ * with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile_id* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile_id* is set to 0, i.e., the default profile, this function
+ * behaves the same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue *queue_id*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ * Points to an array of *nb_unlinks* event queues to be unlinked
+ * from the event port.
+ * NULL value is allowed, in which case this function unlinks all the
+ * event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ * The number of unlinks to establish. This parameter is ignored if queues is
+ * NULL.
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile_id);
+
/**
* Returns the number of unlinks in progress.
*
@@ -1724,6 +1872,42 @@ int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
uint8_t queues[], uint8_t priorities[]);
+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a *profile_id* and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ *
+ * @param port_id
+ * Event port identifier.
+ *
+ * @param[out] queues
+ * Points to an array of *queues* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ * Points to an array of *priorities* for output.
+ * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ * store the service priority associated with each event queue linked
+ *
+ * @param profile_id
+ * The profile identifier associated with the links between event queues and
+ * event port. Should be less than the max capability reported by
+ * ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ * *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+ uint8_t priorities[], uint8_t profile_id);
+
/**
* Retrieve the service ID of the event dev. If the adapter doesn't use
* a rte_service function, this function returns -ESRCH.
@@ -2309,6 +2493,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
return 0;
}
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ * The identifier of the device.
+ * @param port_id
+ * The identifier of the event port.
+ * @param profile_id
+ * The identifier of the profile.
+ * @return
+ * - 0 on success.
+ * - -EINVAL if *dev_id*, *port_id*, or *profile_id* is invalid.
+ */
+__rte_experimental
+static inline uint8_t
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile_id)
+{
+ const struct rte_event_fp_ops *fp_ops;
+ void *port;
+
+ fp_ops = &rte_event_fp_ops[dev_id];
+ port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+ if (dev_id >= RTE_EVENT_MAX_DEVS ||
+ port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+ return -EINVAL;
+
+ if (port == NULL)
+ return -EINVAL;
+
+ if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+ return -EINVAL;
+#endif
+ rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile_id);
+
+ return fp_ops->profile_switch(port, profile_id);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 83e8736c71..5b405518d1 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -46,6 +46,9 @@ typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[
uint16_t nb_events);
/**< @internal Enqueue burst of events on DMA adapter */
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+/**< @internal Switch active link profile on the event port. */
+
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
@@ -71,6 +74,8 @@ struct rte_event_fp_ops {
/**< PMD Crypto adapter enqueue function. */
event_dma_adapter_enqueue_t dma_enqueue;
/**< PMD DMA adapter enqueue function. */
+ event_profile_switch_t profile_switch;
+ /**< PMD event port profile switch function. */
uintptr_t reserved[4];
} __rte_cache_aligned;
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..04d510ad00 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
rte_trace_point_emit_int(op);
)
+RTE_TRACE_POINT_FP(
+ rte_eventdev_trace_port_profile_switch,
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_u8(port_id);
+ rte_trace_point_emit_u8(profile);
+)
+
RTE_TRACE_POINT_FP(
rte_eventdev_trace_eth_tx_adapter_enqueue,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b81eb2919c..59ee8b86cf 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -150,6 +150,10 @@ EXPERIMENTAL {
rte_event_dma_adapter_vchan_add;
rte_event_dma_adapter_vchan_del;
rte_event_eth_rx_adapter_create_ext_with_params;
+ rte_event_port_profile_links_set;
+ rte_event_port_profile_unlink;
+ rte_event_port_profile_links_get;
+ __rte_eventdev_trace_port_profile_switch;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v6 2/3] event/cnxk: implement event link profiles
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
2023-10-03 9:47 ` [PATCH v6 1/3] eventdev: introduce " pbhagavatula
@ 2023-10-03 9:47 ` pbhagavatula
2023-10-03 9:47 ` [PATCH v6 3/3] test/event: add event link profile test pbhagavatula
2023-10-03 10:36 ` [PATCH v6 0/3] Introduce event link profiles Jerin Jacob
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 9:47 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Implement event link profiles support on CN10K and CN9K.
Both platforms support up to two link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/rel_notes/release_23_11.rst | 2 +
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +++----
drivers/common/cnxk/roc_sso.h | 8 +--
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++-----
drivers/event/cnxk/cn10k_worker.c | 11 ++++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++----------
drivers/event/cnxk/cn9k_worker.c | 22 ++++++++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 +++++++------
drivers/event/cnxk/cnxk_eventdev.h | 10 ++--
15 files changed, 161 insertions(+), 81 deletions(-)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 1a59233282..cccb8a0304 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are:
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
+- Up to 2 event link profiles.
Prerequisites and Compilation procedure
---------------------------------------
diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index bee69bf8f4..5d353e3670 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,7 +12,8 @@ runtime_port_link = Y
multiple_queue_port = Y
carry_flow_id = Y
maintenance_free = Y
-runtime_queue_attr = y
+runtime_queue_attr = Y
+profile_links = Y
[Eth Rx adapter Features]
internal_port = Y
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index fe6656bed2..66c4ddf37c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -105,6 +105,8 @@ New Features
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
to get the remaining ticks to expire for a given event timer.
+ * Added link profiles support for the Marvell CNXK event device driver.
+   Up to two link profiles are supported per event port.
Removed Items
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index d76158e30d..690d47c045 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
}
/* Setup hwgrp->hws link */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true);
/* Enable HWGRP */
plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
@@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev)
nix_inl_sso_unregister_irqs(inl_dev);
/* Unlink hws */
- sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+ sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false);
/* Release XAQ aura */
sso_hwgrp_release_xaq(&inl_dev->dev, 1);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index c37da685da..748d287bad 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -186,8 +186,8 @@ sso_rsrc_get(struct roc_sso *roc_sso)
}
void
-sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable)
+sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable)
{
uint64_t reg;
int i, j, k;
@@ -204,7 +204,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
k = n % 4;
k = k ? k : 4;
for (j = 0; j < k; j++) {
- mask[j] = hwgrp[i + j] | enable << 14;
+ mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14;
if (bmp) {
enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
plt_bitmap_clear(bmp, hwgrp[i + j]);
@@ -290,8 +290,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns)
}
int
-roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -299,14 +299,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1);
return nb_hwgrp;
}
int
-roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
- uint16_t nb_hwgrp)
+roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp,
+ uint8_t set)
{
struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
struct sso *sso;
@@ -314,7 +314,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
sso = roc_sso_to_sso_priv(roc_sso);
base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12);
- sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0);
+ sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0);
return nb_hwgrp;
}
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index 8ee62afb9a..64f14b8119 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso,
uint16_t hwgrp, uint8_t weight,
uint8_t affinity, uint8_t priority);
uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns);
-int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
- uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+ uint16_t nb_hwgrp, uint8_t set);
int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso,
uint8_t hws, uint16_t hwgrp);
uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
void **rsp);
int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
- uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+ uint16_t n, uint8_t set, uint16_t enable);
int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index cf186b9af4..bb0c910553 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
}
static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = port;
- return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
{
struct cnxk_sso_evdev *dev = arg;
struct cn10k_sso_hws *ws = hws;
- uint16_t i;
+ uint16_t i, j;
- for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+ for (j = 0; j < dev->nb_event_queues; j++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
memset(ws, 0, sizeof(*ws));
}
@@ -482,6 +483,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+ event_dev->profile_switch = cn10k_sso_hws_profile_switch;
#else
RTE_SET_USED(event_dev);
#endif
@@ -633,9 +635,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -644,14 +645,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -659,11 +660,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static void
cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
{
@@ -1020,6 +1035,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.port_quiesce = cn10k_sso_port_quiesce,
.port_link = cn10k_sso_port_link,
.port_unlink = cn10k_sso_port_unlink,
+ .port_link_profile = cn10k_sso_port_link_profile,
+ .port_unlink_profile = cn10k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn10k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e71ab3c523..26fecf21fb 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -329,6 +329,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn10k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index fe6f5d9f86..9fb9ca0d63 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
- nb_link);
- rc |= roc_sso_hws_link(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+ profile);
+ rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
}
static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
{
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
if (dev->dual_ws) {
dws = port;
- rc = roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
- map, nb_link);
- rc |= roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
- map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+ nb_link, profile);
+ rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+ nb_link, profile);
} else {
ws = port;
- rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+ rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
}
return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
struct cnxk_sso_evdev *dev = arg;
struct cn9k_sso_hws_dual *dws;
struct cn9k_sso_hws *ws;
- uint16_t i;
+ uint16_t i, k;
if (dev->dual_ws) {
dws = hws;
for (i = 0; i < dev->nb_event_queues; i++) {
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
- roc_sso_hws_unlink(&dev->sso,
- CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+ &i, 1, k);
+ roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+ &i, 1, k);
+ }
}
memset(dws, 0, sizeof(*dws));
} else {
ws = hws;
for (i = 0; i < dev->nb_event_queues; i++)
- roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+ roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
memset(ws, 0, sizeof(*ws));
}
}
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+ event_dev->profile_switch = cn9k_sso_hws_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
event_dev->enqueue_forward_burst =
cn9k_sso_hws_dual_enq_fwd_burst;
event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+ event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -708,9 +709,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
}
static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_links];
@@ -719,14 +719,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
RTE_SET_USED(priorities);
for (link = 0; link < nb_links; link++)
hwgrp_ids[link] = queues[link];
- nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+ nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
return (int)nb_links;
}
static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks, uint8_t profile)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t hwgrp_ids[nb_unlinks];
@@ -734,11 +734,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
for (unlink = 0; unlink < nb_unlinks; unlink++)
hwgrp_ids[unlink] = queues[unlink];
- nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+ nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
return (int)nb_unlinks;
}
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+ const uint8_t priorities[], uint16_t nb_links)
+{
+ return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+ uint16_t nb_unlinks)
+{
+ return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
static int
cn9k_sso_start(struct rte_eventdev *event_dev)
{
@@ -1019,6 +1033,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
.port_quiesce = cn9k_sso_port_quiesce,
.port_link = cn9k_sso_port_link,
.port_unlink = cn9k_sso_port_unlink,
+ .port_link_profile = cn9k_sso_port_link_profile,
+ .port_unlink_profile = cn9k_sso_port_unlink_profile,
.timeout_ticks = cnxk_sso_timeout_ticks,
.eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws *ws = port;
+
+ ws->gw_wdata &= ~(0xFFUL);
+ ws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
+
/* Dual ws ops. */
uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws],
ev->event_ptr);
}
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+ struct cn9k_sso_hws_dual *dws = port;
+
+ dws->gw_wdata &= ~(0xFFUL);
+ dws->gw_wdata |= (profile + 1);
+
+ return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ee659e80d6..6936b7ad04 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -366,6 +366,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
const struct rte_event *ev);
@@ -382,6 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
#define R(name, flags) \
uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index e8ea7e0efb..0c61f4c20e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,7 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
- RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+ RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+ RTE_EVENT_DEV_CAP_PROFILE_LINK;
+ dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
}
int
@@ -128,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
- int i, j;
+ int i, j, k;
for (i = 0; i < dev->nb_event_ports; i++) {
- uint16_t nb_hwgrp = 0;
-
- links_map = event_dev->data->links_map[0];
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+ for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+ uint16_t nb_hwgrp = 0;
+
+ links_map = event_dev->data->links_map[k];
+ /* Point links_map to this port specific area */
+ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+ for (j = 0; j < dev->nb_event_queues; j++) {
+ if (links_map[j] == 0xdead)
+ continue;
+ hwgrp[nb_hwgrp] = j;
+ nb_hwgrp++;
+ }
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- hwgrp[nb_hwgrp] = j;
- nb_hwgrp++;
+ link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
}
-
- link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
}
}
@@ -435,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
- uint16_t i;
+ uint16_t i, j;
void *ws;
if (!dev->configured)
@@ -446,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
for (i = 0; i < dev->nb_event_ports; i++) {
ws = event_dev->data->ports[i];
- unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+ for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+ unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
rte_free(cnxk_sso_hws_get_cookie(ws));
event_dev->data->ports[i] = NULL;
}
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index bd50de87c0..d42d1afa1a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
#define CN10K_SSO_GW_MODE "gw_mode"
#define CN10K_SSO_STASH "stash"
+#define CNXK_SSO_MAX_PROFILES 2
+
#define NSEC2USEC(__ns) ((__ns) / 1E3)
#define USEC2NSEC(__us) ((__us)*1E3)
#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@
typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
- uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+ uint8_t profile);
typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH v6 3/3] test/event: add event link profile test
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
2023-10-03 9:47 ` [PATCH v6 1/3] eventdev: introduce " pbhagavatula
2023-10-03 9:47 ` [PATCH v6 2/3] event/cnxk: implement event " pbhagavatula
@ 2023-10-03 9:47 ` pbhagavatula
2023-10-03 10:36 ` [PATCH v6 0/3] Introduce event link profiles Jerin Jacob
3 siblings, 0 replies; 44+ messages in thread
From: pbhagavatula @ 2023-10-03 9:47 UTC (permalink / raw)
To: jerinj, pbhagavatula, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add test case to verify event link profiles.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index c51c93bdbd..0ecfa7db02 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
return TEST_SUCCESS;
}
+static int
+test_eventdev_profile_switch(void)
+{
+#define MAX_RETRIES 4
+ uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+ struct rte_event_queue_conf qcfg;
+ struct rte_event_port_conf pcfg;
+ struct rte_event_dev_info info;
+ struct rte_event ev;
+ uint8_t q, re;
+ int rc;
+
+ rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+ if (info.max_profiles_per_port <= 1)
+ return TEST_SKIPPED;
+
+ if (info.max_event_queues <= 1)
+ return TEST_SKIPPED;
+
+ rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+ rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+ rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+ rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+ TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+ q = 0;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+ TEST_ASSERT(rc == 1, "Failed to get links");
+ TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+ rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+ TEST_ASSERT(rc == 1, "Failed to get links");
+ TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+ rc = rte_event_dev_start(TEST_DEV_ID);
+ TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+ ev.event_type = RTE_EVENT_TYPE_CPU;
+ ev.queue_id = 0;
+ ev.op = RTE_EVENT_OP_NEW;
+ ev.flow_id = 0;
+ ev.u64 = 0xBADF00D0;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+ ev.queue_id = 1;
+ ev.flow_id = 1;
+ rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+ TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+ ev.event = 0;
+ ev.u64 = 0;
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ printf("rc %d\n", rc);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+ TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+ TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+ TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ if (rc)
+ break;
+ }
+
+ TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+ TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+ TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+ re = MAX_RETRIES;
+ while (re--) {
+ rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+ TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+ }
+
+ q = 0;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 0 from port 0 with profile 0");
+ q = 1;
+ rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+ TEST_ASSERT(rc == 1, "Failed to unlink queue 1 from port 0 with profile 1");
+
+ return TEST_SUCCESS;
+}
+
static int
test_eventdev_close(void)
{
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
test_eventdev_timeout_ticks),
TEST_CASE_ST(NULL, NULL,
test_eventdev_start_stop),
+ TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+ test_eventdev_profile_switch),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
test_eventdev_link),
TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
--
2.25.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH v6 0/3] Introduce event link profiles
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
` (2 preceding siblings ...)
2023-10-03 9:47 ` [PATCH v6 3/3] test/event: add event link profile test pbhagavatula
@ 2023-10-03 10:36 ` Jerin Jacob
2023-10-03 14:12 ` Bruce Richardson
3 siblings, 1 reply; 44+ messages in thread
From: Jerin Jacob @ 2023-10-03 10:36 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be associated
> with a unique identifier called a link profile. Multiple such profiles
> can be configured, based on the event device capability, using the function
> `rte_event_port_profile_links_set`, which takes arguments similar to
> `rte_event_port_link` in addition to the profile identifier.
>
> The maximum number of link profiles supported by an event device is
> advertised through the structure member
...
>
> v6 Changes:
Series applied to dpdk-next-net-eventdev/for-main with the following changes. Thanks
[for-main]dell[dpdk-next-eventdev] $ git diff
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 8c15c678bf..e177ca6bdb 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -325,7 +325,7 @@ multiple link profile per port and change them run time depending up on heuristics
Using Link profiles can reduce the overhead of linking/unlinking and wait for unlinks in progress
in fast-path and gives applications the ability to switch between preset profiles on the fly.
-An Example use case could be as follows.
+An example use case could be as follows.
Config path:
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 66c4ddf37c..261594aacc 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -105,8 +105,7 @@ New Features
* Added support for ``remaining_ticks_get`` timer adapter PMD callback
to get the remaining ticks to expire for a given event timer.
- * Added link profiles support for Marvell CNXK event device driver,
- up to two link profiles are supported per event port.
+ * Added link profiles support, up to two link profiles are supported.
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH v6 0/3] Introduce event link profiles
2023-10-03 10:36 ` [PATCH v6 0/3] Introduce event link profiles Jerin Jacob
@ 2023-10-03 14:12 ` Bruce Richardson
2023-10-03 15:17 ` Jerin Jacob
0 siblings, 1 reply; 44+ messages in thread
From: Bruce Richardson @ 2023-10-03 14:12 UTC (permalink / raw)
To: Jerin Jacob
Cc: pbhagavatula, jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > A collection of event queues linked to an event port can be associated
> > with a unique identifier called a link profile. Multiple such profiles
> > can be configured, based on the event device capability, using the function
> > `rte_event_port_profile_links_set`, which takes arguments similar to
> > `rte_event_port_link` in addition to the profile identifier.
> >
> > The maximum number of link profiles supported by an event device is
> > advertised through the structure member
>
> ...
>
> >
> > v6 Changes:
>
> Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks
>
I'm doing some investigation work on the software eventdev, using
eventdev_pipeline, and following these patches the eventdev_pipeline sample
no longer works for me. The error message is shown below:
Config:
ports: 2
workers: 22
packets: 33554432
Queue-prio: 0
qid0 type: ordered
Cores available: 48
Cores used: 24
Eventdev 0: event_sw
Stages:
Stage 0, Type Ordered Priority = 128
EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
Error setting up port 0
Parameters used when running the app:
-l 24-47 --in-memory --vdev=event_sw0 -- \
-r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000 -c 64 -W 500
Regards,
/Bruce
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH v6 0/3] Introduce event link profiles
2023-10-03 14:12 ` Bruce Richardson
@ 2023-10-03 15:17 ` Jerin Jacob
2023-10-03 15:32 ` [EXT] " Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 44+ messages in thread
From: Jerin Jacob @ 2023-10-03 15:17 UTC (permalink / raw)
To: Bruce Richardson
Cc: pbhagavatula, jerinj, sthotton, timothy.mcdaniel, hemant.agrawal,
sachin.saxena, mattias.ronnblom, liangma, peter.mccarthy,
harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
s.v.naga.harish.k, anatoly.burakov, dev
On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> > >
> > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > >
> > > A collection of event queues linked to an event port can be associated
> > > with a unique identifier called a link profile. Multiple such profiles
> > > can be configured, based on the event device capability, using the function
> > > `rte_event_port_profile_links_set`, which takes arguments similar to
> > > `rte_event_port_link` in addition to the profile identifier.
> > >
> > > The maximum number of link profiles supported by an event device is
> > > advertised through the structure member
> >
> > ...
> >
> > >
> > > v6 Changes:
> >
> > Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks
> >
>
> I'm doing some investigation work on the software eventdev, using
> eventdev_pipeline, and following these patches the eventdev_pipeline sample
> no longer works for me. The error message is shown below:
>
> Config:
> ports: 2
> workers: 22
> packets: 33554432
> Queue-prio: 0
> qid0 type: ordered
> Cores available: 48
> Cores used: 24
> Eventdev 0: event_sw
> Stages:
> Stage 0, Type Ordered Priority = 128
>
> EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
> Error setting up port 0
>
> Parameters used when running the app:
> -l 24-47 --in-memory --vdev=event_sw0 -- \
> -r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000 -c 64 -W 500
The following max_profiles_per_port = 1 default is getting overridden in [1]. I
was advised to take this path to avoid driver changes.
It looks like we cannot rely on the common code. @Pavan Nikhilesh could you
change back to your old version (where every driver adds
max_profiles_per_port = 1)?
I will squash it.
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 60509c6efb..5ee8bd665b 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct
rte_event_dev_info *dev_info)
return -EINVAL;
memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+ dev_info->max_profiles_per_port = 1;
[1]
static void
sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
{
RTE_SET_USED(dev);
static const struct rte_event_dev_info evdev_sw_info = {
.driver_name = SW_PMD_NAME,
.max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
.max_event_queue_flows = SW_QID_NUM_FIDS,
.max_event_queue_priority_levels = SW_Q_PRIORITY_MAX,
.max_event_priority_levels = SW_IQS_MAX,
.max_event_ports = SW_PORTS_MAX,
.max_event_port_dequeue_depth = MAX_SW_CONS_Q_DEPTH,
.max_event_port_enqueue_depth = MAX_SW_PROD_Q_DEPTH,
.max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
.event_dev_cap = (
RTE_EVENT_DEV_CAP_QUEUE_QOS |
RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_EVENT_QOS |
RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
};
*info = evdev_sw_info;
}
>
> Regards,
> /Bruce
^ permalink raw reply [flat|nested] 44+ messages in thread
* RE: [EXT] Re: [PATCH v6 0/3] Introduce event link profiles
2023-10-03 15:17 ` Jerin Jacob
@ 2023-10-03 15:32 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 44+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2023-10-03 15:32 UTC (permalink / raw)
To: Jerin Jacob, Bruce Richardson
Cc: Jerin Jacob Kollanukkaran, Shijith Thotton, timothy.mcdaniel,
hemant.agrawal, sachin.saxena, mattias.ronnblom, liangma,
peter.mccarthy, harry.van.haaren, erik.g.carrillo,
abhinandan.gujjar, s.v.naga.harish.k, anatoly.burakov, dev
> On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> > > >
> > > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > >
> > > > A collection of event queues linked to an event port can be associated
> > > > with a unique identifier called a link profile. Multiple such profiles
> > > > can be configured, based on the event device capability, using the
> > > > function `rte_event_port_profile_links_set`, which takes arguments
> > > > similar to `rte_event_port_link` in addition to the profile identifier.
> > > >
> > > > The maximum number of link profiles supported by an event device is
> > > > advertised through the structure member
> > >
> > > ...
> > >
> > > >
> > > > v6 Changes:
> > >
> > > Series applied to dpdk-next-net-eventdev/for-main with the following changes. Thanks
> > >
> >
> > I'm doing some investigation work on the software eventdev, using
> > eventdev_pipeline, and following these patches the eventdev_pipeline
> > sample is no longer working for me. The error message is shown below:
> >
> > Config:
> > ports: 2
> > workers: 22
> > packets: 33554432
> > Queue-prio: 0
> > qid0 type: ordered
> > Cores available: 48
> > Cores used: 24
> > Eventdev 0: event_sw
> > Stages:
> > Stage 0, Type Ordered Priority = 128
> >
> > EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
> > Error setting up port 0
> >
> > Parameters used when running the app:
> > -l 24-47 --in-memory --vdev=event_sw0 -- \
> > -r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000 -c 64 -W 500
>
>
> The following max_profiles_per_port = 1 default is getting overridden
> in [1]. I was advised to take this path to avoid driver changes, but
> it looks like we cannot rely on the common code. @Pavan Nikhilesh,
> could you change back to your old version (where every driver is
> changed to add max_profiles_per_port = 1)? I will squash it.
>
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 60509c6efb..5ee8bd665b 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct
> rte_event_dev_info *dev_info)
> return -EINVAL;
>
> memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> + dev_info->max_profiles_per_port = 1;
This should be fixed with the following patch; @Bruce Richardson, could you please verify?
https://patchwork.dpdk.org/project/dpdk/patch/20231003152535.10177-1-pbhagavatula@marvell.com/
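[Editorial note: a minimal sketch of the per-driver alternative Jerin suggests, applied to the software PMD's info struct quoted in [1]. The file path and field placement are assumptions; only the added line matters.]

```diff
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ static const struct rte_event_dev_info evdev_sw_info = {
         .max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
+        .max_profiles_per_port = 1,
         .event_dev_cap = (
```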
>
> [1]
> static void
> sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> {
> RTE_SET_USED(dev);
>
> static const struct rte_event_dev_info evdev_sw_info = {
> .driver_name = SW_PMD_NAME,
> .max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
> .max_event_queue_flows = SW_QID_NUM_FIDS,
> .max_event_queue_priority_levels = SW_Q_PRIORITY_MAX,
> .max_event_priority_levels = SW_IQS_MAX,
> .max_event_ports = SW_PORTS_MAX,
> .max_event_port_dequeue_depth =
> MAX_SW_CONS_Q_DEPTH,
> .max_event_port_enqueue_depth =
> MAX_SW_PROD_Q_DEPTH,
> .max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
> .event_dev_cap = (
> RTE_EVENT_DEV_CAP_QUEUE_QOS |
> RTE_EVENT_DEV_CAP_BURST_MODE |
> RTE_EVENT_DEV_CAP_EVENT_QOS |
> RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
> RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
> RTE_EVENT_DEV_CAP_NONSEQ_MODE |
> RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
> };
>
> *info = evdev_sw_info;
> }
>
>
> >
> > Regards,
> > /Bruce
end of thread, other threads:[~2023-10-03 15:32 UTC | newest]
Thread overview: 44+ messages
2023-08-09 14:26 [RFC 0/3] Introduce event link profiles pbhagavatula
2023-08-09 14:26 ` [RFC 1/3] eventdev: introduce " pbhagavatula
2023-08-18 10:27 ` Jerin Jacob
2023-08-09 14:26 ` [RFC 2/3] event/cnxk: implement event " pbhagavatula
2023-08-09 14:26 ` [RFC 3/3] test/event: add event link profile test pbhagavatula
2023-08-09 19:45 ` [RFC 0/3] Introduce event link profiles Mattias Rönnblom
2023-08-10 5:17 ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-08-12 5:52 ` Mattias Rönnblom
2023-08-14 11:29 ` Pavan Nikhilesh Bhagavatula
2023-08-25 18:44 ` [PATCH " pbhagavatula
2023-08-25 18:44 ` [PATCH 1/3] eventdev: introduce " pbhagavatula
2023-08-25 18:44 ` [PATCH 2/3] event/cnxk: implement event " pbhagavatula
2023-08-25 18:44 ` [PATCH 3/3] test/event: add event link profile test pbhagavatula
2023-08-31 20:44 ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
2023-08-31 20:44 ` [PATCH v2 1/3] eventdev: introduce " pbhagavatula
2023-09-20 4:22 ` Jerin Jacob
2023-08-31 20:44 ` [PATCH v2 2/3] event/cnxk: implement event " pbhagavatula
2023-08-31 20:44 ` [PATCH v2 3/3] test/event: add event link profile test pbhagavatula
2023-09-21 10:28 ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
2023-09-21 10:28 ` [PATCH v3 1/3] eventdev: introduce " pbhagavatula
2023-09-27 15:23 ` Jerin Jacob
2023-09-21 10:28 ` [PATCH v3 2/3] event/cnxk: implement event " pbhagavatula
2023-09-27 15:29 ` Jerin Jacob
2023-09-21 10:28 ` [PATCH v3 3/3] test/event: add event link profile test pbhagavatula
2023-09-27 14:56 ` [PATCH v3 0/3] Introduce event link profiles Jerin Jacob
2023-09-28 10:12 ` [PATCH v4 " pbhagavatula
2023-09-28 10:12 ` [PATCH v4 1/3] eventdev: introduce " pbhagavatula
2023-10-03 6:55 ` Jerin Jacob
2023-09-28 10:12 ` [PATCH v4 2/3] event/cnxk: implement event " pbhagavatula
2023-09-28 10:12 ` [PATCH v4 3/3] test/event: add event link profile test pbhagavatula
2023-09-28 14:45 ` [PATCH v4 0/3] Introduce event link profiles Jerin Jacob
2023-09-29 9:27 ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-10-03 7:51 ` [PATCH v5 " pbhagavatula
2023-10-03 7:51 ` [PATCH v5 1/3] eventdev: introduce " pbhagavatula
2023-10-03 7:51 ` [PATCH v5 2/3] event/cnxk: implement event " pbhagavatula
2023-10-03 7:51 ` [PATCH v5 3/3] test/event: add event link profile test pbhagavatula
2023-10-03 9:47 ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
2023-10-03 9:47 ` [PATCH v6 1/3] eventdev: introduce " pbhagavatula
2023-10-03 9:47 ` [PATCH v6 2/3] event/cnxk: implement event " pbhagavatula
2023-10-03 9:47 ` [PATCH v6 3/3] test/event: add event link profile test pbhagavatula
2023-10-03 10:36 ` [PATCH v6 0/3] Introduce event link profiles Jerin Jacob
2023-10-03 14:12 ` Bruce Richardson
2023-10-03 15:17 ` Jerin Jacob
2023-10-03 15:32 ` [EXT] " Pavan Nikhilesh Bhagavatula