DPDK patches and discussions
From: <pbhagavatula@marvell.com>
To: <jerinj@marvell.com>, <pbhagavatula@marvell.com>,
	<sthotton@marvell.com>,  <timothy.mcdaniel@intel.com>,
	<hemant.agrawal@nxp.com>, <sachin.saxena@nxp.com>,
	<mattias.ronnblom@ericsson.com>, <liangma@liangbit.com>,
	<peter.mccarthy@intel.com>, <harry.van.haaren@intel.com>,
	<erik.g.carrillo@intel.com>, <abhinandan.gujjar@intel.com>,
	<s.v.naga.harish.k@intel.com>, <anatoly.burakov@intel.com>
Cc: <dev@dpdk.org>
Subject: [PATCH 1/3] eventdev: introduce link profiles
Date: Sat, 26 Aug 2023 00:14:33 +0530
Message-ID: <20230825184435.2986-2-pbhagavatula@marvell.com>
In-Reply-To: <20230825184435.2986-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

A collection of event queues linked to an event port can be associated
with a unique identifier called a profile. Multiple such profiles can
be created, based on the event device capability, using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` plus a profile identifier.

The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0 on
initialization.
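
As a rough configuration-time sketch (the `dev_id`, `port_id` and queue
identifiers below are illustrative, and the device is assumed to report
at least two profiles):

    struct rte_event_dev_info info;
    uint8_t hq[] = {0, 1, 2, 3}; /* queues to serve via profile 0 */
    uint8_t lq[] = {4, 5, 6, 7}; /* queues to serve via profile 1 */

    rte_event_dev_info_get(dev_id, &info);
    if (info.max_profiles_per_port < 2)
        return -ENOTSUP;

    /* NULL priorities select RTE_EVENT_DEV_PRIORITY_NORMAL for each link. */
    rte_event_port_profile_links_set(dev_id, port_id, hq, NULL, 4, 0);
    rte_event_port_profile_links_set(dev_id, port_id, lq, NULL, 4, 1);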

Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch`
to change the currently active profile on an event port. This takes
effect from the next `rte_event_dequeue_burst` call, where only the
event queues associated with the newly active link profile participate
in scheduling.
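
A minimal worker-loop sketch; the switch heuristic used here (toggling
between the two preset profiles whenever a dequeue returns no events)
and the `done` stop flag are purely illustrative:

    struct rte_event ev[32];
    uint8_t active_profile = 0;
    uint16_t nb;

    while (!done) {
        nb = rte_event_dequeue_burst(dev_id, port_id, ev, RTE_DIM(ev), 0);
        if (nb == 0) {
            /* Nothing on the current profile's queues; switch to the
             * other preset profile. The change takes effect from the
             * next dequeue call. */
            active_profile = active_profile ? 0 : 1;
            rte_event_port_profile_switch(dev_id, port_id, active_profile);
            continue;
        }
        /* Process the nb events dequeued from the active profile. */
    }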

An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
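
For example, the links belonging to a non-default profile can be read
back and then removed as a unit (a sketch; error handling omitted):

    uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
    uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
    int nb_links;

    /* Read back the queue/priority pairs linked under profile 1 ... */
    nb_links = rte_event_port_profile_links_get(dev_id, port_id, queues,
                                                priorities, 1);
    /* ... and unlink all of them from that profile. */
    if (nb_links > 0)
        rte_event_port_profile_unlink(dev_id, port_id, queues, nb_links, 1);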

Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives
applications the ability to switch between preset profiles on the fly.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 config/rte_config.h                        |   1 +
 doc/guides/eventdevs/features/default.ini  |   1 +
 doc/guides/prog_guide/eventdev.rst         |  40 ++++
 doc/guides/rel_notes/release_23_11.rst     |  17 ++
 drivers/event/cnxk/cnxk_eventdev.c         |   3 +-
 drivers/event/dlb2/dlb2.c                  |   1 +
 drivers/event/dpaa/dpaa_eventdev.c         |   1 +
 drivers/event/dpaa2/dpaa2_eventdev.c       |   2 +-
 drivers/event/dsw/dsw_evdev.c              |   1 +
 drivers/event/octeontx/ssovf_evdev.c       |   2 +-
 drivers/event/opdl/opdl_evdev.c            |   1 +
 drivers/event/skeleton/skeleton_eventdev.c |   1 +
 drivers/event/sw/sw_evdev.c                |   1 +
 lib/eventdev/eventdev_pmd.h                |  59 +++++-
 lib/eventdev/eventdev_private.c            |   9 +
 lib/eventdev/eventdev_trace.h              |  22 ++
 lib/eventdev/eventdev_trace_points.c       |   6 +
 lib/eventdev/rte_eventdev.c                | 146 ++++++++++---
 lib/eventdev/rte_eventdev.h                | 231 +++++++++++++++++++++
 lib/eventdev/rte_eventdev_core.h           |   4 +
 lib/eventdev/rte_eventdev_trace_fp.h       |   8 +
 lib/eventdev/version.map                   |   6 +
 22 files changed, 533 insertions(+), 30 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
 #define RTE_EVENT_MAX_DEVS 16
 #define RTE_EVENT_MAX_PORTS_PER_DEV 255
 #define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..1c0082352b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port        =
 carry_flow_id              =
 maintenance_free           =
 runtime_queue_attr         =
+profile_links              =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..9c07870a79 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
         }
         int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
 
+Linking Queues to Ports with profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at runtime depending upon heuristic data.
+Using link profiles can reduce the overhead of linking/unlinking and waiting for in-progress
+unlinks in the fast path, and gives applications the ability to switch between preset profiles on the fly.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+        uint8_t lq[4] = {4, 5, 6, 7};
+        uint8_t hq[4] = {0, 1, 2, 3};
+
+        if (rte_event_dev_info.max_profiles_per_port < 2)
+            return -ENOTSUP;
+
+        rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+        rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+        uint8_t profile_id_to_switch;
+
+        while (1) {
+            deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+            if (deq == 0) {
+                profile_id_to_switch = app_findprofile_id_to_switch();
+                rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+                continue;
+            }
+
+            // Process the event received.
+        }
+
 Starting the EventDev
 ~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 333e1d95a2..e19a0ed3c3 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -78,6 +78,23 @@ New Features
 * build: Optional libraries can now be selected with the new ``enable_libs``
   build option similarly to the existing ``enable_drivers`` build option.
 
+* **Added eventdev support to link queues to port with profile.**
+
+  Introduced event link profiles that can be used to associate links between
+  event queues and an event port with a unique identifier termed a profile.
+  The profile can be used to switch between the associated links in the fast path
+  without the additional overhead of linking/unlinking and waiting for unlinks.
+
+  * Added ``rte_event_port_profile_links_set`` to link event queues to an event
+    port with a unique profile identifier.
+
+  * Added ``rte_event_port_profile_unlink`` to unlink event queues from an event
+    port associated with a profile.
+
+  * Added ``rte_event_port_profile_links_get`` to retrieve links associated with
+    a profile.
+
+  * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
 
 Removed Items
 -------------
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 27883a3619..529622cac6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -31,6 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
 				  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
+	dev_info->max_profiles_per_port = 1;
 }
 
 int
@@ -133,7 +134,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
 	for (i = 0; i < dev->nb_event_ports; i++) {
 		uint16_t nb_hwgrp = 0;
 
-		links_map = event_dev->data->links_map;
+		links_map = event_dev->data->links_map[0];
 		/* Point links_map to this port specific area */
 		links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
 
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 60c5cd4804..580057870f 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
 			  RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
 			  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 			  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+	.max_profiles_per_port = 1,
 };
 
 struct process_local_port_data
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 4b3d16735b..f615da3813 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 		RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+	dev_info->max_profiles_per_port = 1;
 }
 
 static int
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index fa1a1ade80..ffc5550f85 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
 		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
 		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 		RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+	dev_info->max_profiles_per_port = 1;
 }
 
 static int
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 6c5cde2468..785c12f61f 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
 		.max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
 		.max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
 		.max_num_events = DSW_MAX_EVENTS,
+		.max_profiles_per_port = 1,
 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 650266b996..0eb9358981 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
 					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 					RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+	dev_info->max_profiles_per_port = 1;
 }
 
 static int
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9ce8b39b60..dd25749654 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
 				 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 				 RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
+		.max_profiles_per_port = 1,
 	};
 
 	*info = evdev_opdl_info;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index 8513b9a013..dc9b131641 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
 					RTE_EVENT_DEV_CAP_EVENT_QOS |
 					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 					RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+	dev_info->max_profiles_per_port = 1;
 }
 
 static int
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index cfd659d774..6d1816b76d 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
 				RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+			.max_profiles_per_port = 1,
 	};
 
 	*info = evdev_sw_info;
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f62f42e140..66fdad71f3 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -119,8 +119,8 @@ struct rte_eventdev_data {
 	/**< Array of port configuration structures. */
 	struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/**< Array of queue configuration structures. */
-	uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
-			   RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT]
+			  [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/**< Memory to store queues to port connections. */
 	void *dev_private;
 	/**< PMD-specific private data */
@@ -178,6 +178,9 @@ struct rte_eventdev {
 	event_tx_adapter_enqueue_t txa_enqueue;
 	/**< Pointer to PMD eth Tx adapter enqueue function. */
 	event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	event_profile_switch_t profile_switch;
+	/**< PMD Event switch profile function. */
 
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
 	void *reserved_ptrs[3];	  /**< Reserved for future fields */
@@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
 		const uint8_t queues[], const uint8_t priorities[],
 		uint16_t nb_links);
 
+/**
+ * Link multiple source event queues associated with a profile to a destination
+ * event port.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port
+ *   Event port pointer
+ * @param queues
+ *   Points to an array of *nb_links* event queues to be linked
+ *   to the event port.
+ * @param priorities
+ *   Points to an array of *nb_links* service priorities associated with each
+ *   event queue link to event port.
+ * @param nb_links
+ *   The number of links to establish.
+ * @param profile
+ *   The profile ID to associate the links.
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port,
+					    const uint8_t queues[], const uint8_t priorities[],
+					    uint16_t nb_links, uint8_t profile);
+
 /**
  * Unlink multiple source event queues from destination event port.
  *
@@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
 typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
 		uint8_t queues[], uint16_t nb_unlinks);
 
+/**
+ * Unlink multiple source event queues associated with a profile from destination
+ * event port.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port
+ *   Event port pointer
+ * @param queues
+ *   An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ * @param profile
+ *   The profile ID of the associated links.
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port,
+					      uint8_t queues[], uint16_t nb_unlinks,
+					      uint8_t profile);
+
 /**
  * Unlinks in progress. Returns number of unlinks that the PMD is currently
  * performing, but have not yet been completed.
@@ -1348,8 +1399,12 @@ struct eventdev_ops {
 
 	eventdev_port_link_t port_link;
 	/**< Link event queues to an event port. */
+	eventdev_port_link_profile_t port_link_profile;
+	/**< Link event queues associated with a profile to an event port. */
 	eventdev_port_unlink_t port_unlink;
 	/**< Unlink event queues from an event port. */
+	eventdev_port_unlink_profile_t port_unlink_profile;
+	/**< Unlink event queues associated with a profile from an event port. */
 	eventdev_port_unlinks_in_progress_t port_unlinks_in_progress;
 	/**< Unlinks in progress on an event port. */
 	eventdev_dequeue_timeout_ticks_t timeout_ticks;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..a5e0bd3de0 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
 	return 0;
 }
 
+static int
+dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile)
+{
+	RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device");
+	return -EINVAL;
+}
+
 void
 event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 {
@@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 		.txa_enqueue_same_dest =
 			dummy_event_tx_adapter_enqueue_same_dest,
 		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+		.profile_switch = dummy_event_port_profile_switch,
 		.data = dummy_data,
 	};
 
@@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
 	fp_op->txa_enqueue = dev->txa_enqueue;
 	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
 	fp_op->ca_enqueue = dev->ca_enqueue;
+	fp_op->profile_switch = dev->profile_switch;
 	fp_op->data = dev->data->ports;
 }
diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h
index f008ef0091..7b89b8d53f 100644
--- a/lib/eventdev/eventdev_trace.h
+++ b/lib/eventdev/eventdev_trace.h
@@ -76,6 +76,17 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_int(rc);
 )
 
+RTE_TRACE_POINT(
+	rte_eventdev_trace_port_link_with_profile,
+	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+		uint16_t nb_links, uint8_t profile, int rc),
+	rte_trace_point_emit_u8(dev_id);
+	rte_trace_point_emit_u8(port_id);
+	rte_trace_point_emit_u16(nb_links);
+	rte_trace_point_emit_u8(profile);
+	rte_trace_point_emit_int(rc);
+)
+
 RTE_TRACE_POINT(
 	rte_eventdev_trace_port_unlink,
 	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
@@ -86,6 +97,17 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_int(rc);
 )
 
+RTE_TRACE_POINT(
+	rte_eventdev_trace_port_unlink_with_profile,
+	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id,
+		uint16_t nb_unlinks, uint8_t profile, int rc),
+	rte_trace_point_emit_u8(dev_id);
+	rte_trace_point_emit_u8(port_id);
+	rte_trace_point_emit_u16(nb_unlinks);
+	rte_trace_point_emit_u8(profile);
+	rte_trace_point_emit_int(rc);
+)
+
 RTE_TRACE_POINT(
 	rte_eventdev_trace_start,
 	RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc),
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index 76144cfe75..cfde20b2a8 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link,
 	lib.eventdev.port.link)
 
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link_with_profile,
+			 lib.eventdev.port.link_with_profile)
+
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink,
 	lib.eventdev.port.unlink)
 
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink_with_profile,
+			 lib.eventdev.port.unlink_with_profile)
+
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start,
 	lib.eventdev.start)
 
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 6ab4524332..2171d131ad 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -270,7 +270,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 	void **ports;
 	uint16_t *links_map;
 	struct rte_event_port_conf *ports_cfg;
-	unsigned int i;
+	unsigned int i, j;
 
 	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
 			 dev->data->dev_id);
@@ -281,7 +281,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 
 		ports = dev->data->ports;
 		ports_cfg = dev->data->ports_cfg;
-		links_map = dev->data->links_map;
 
 		for (i = nb_ports; i < old_nb_ports; i++)
 			(*dev->dev_ops->port_release)(ports[i]);
@@ -297,9 +296,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 				sizeof(ports[0]) * new_ps);
 			memset(ports_cfg + old_nb_ports, 0,
 				sizeof(ports_cfg[0]) * new_ps);
-			for (i = old_links_map_end; i < links_map_end; i++)
-				links_map[i] =
-					EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+			for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) {
+				links_map = dev->data->links_map[i];
+				for (j = old_links_map_end; j < links_map_end; j++)
+					links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+			}
 		}
 	} else {
 		if (*dev->dev_ops->port_release == NULL)
@@ -953,21 +954,44 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
 		    uint16_t nb_links)
 {
-	struct rte_eventdev *dev;
-	uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
+}
+
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+				 const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
+{
 	uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	uint16_t *links_map;
 	int i, diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
 	dev = &rte_eventdevs[dev_id];
 
+	if (*dev->dev_ops->dev_infos_get == NULL)
+		return -ENOTSUP;
+
+	(*dev->dev_ops->dev_infos_get)(dev, &info);
+	if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+		RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+		return -EINVAL;
+	}
+
 	if (*dev->dev_ops->port_link == NULL) {
 		RTE_EDEV_LOG_ERR("Function not supported\n");
 		rte_errno = ENOTSUP;
 		return 0;
 	}
 
+	if (profile && *dev->dev_ops->port_link_profile == NULL) {
+		RTE_EDEV_LOG_ERR("Function not supported\n");
+		rte_errno = ENOTSUP;
+		return 0;
+	}
+
 	if (!is_valid_port(dev, port_id)) {
 		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
 		rte_errno = EINVAL;
@@ -995,18 +1019,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 			return 0;
 		}
 
-	diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
-						queues, priorities, nb_links);
+	if (profile)
+		diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues,
+							  priorities, nb_links, profile);
+	else
+		diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues,
+						  priorities, nb_links);
 	if (diag < 0)
 		return diag;
 
-	links_map = dev->data->links_map;
+	links_map = dev->data->links_map[profile];
 	/* Point links_map to this port specific area */
 	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
 	for (i = 0; i < diag; i++)
 		links_map[queues[i]] = (uint8_t)priorities[i];
 
-	rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag);
+	rte_eventdev_trace_port_link_with_profile(dev_id, port_id, nb_links, profile, diag);
 	return diag;
 }
 
@@ -1014,27 +1042,50 @@ int
 rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 		      uint8_t queues[], uint16_t nb_unlinks)
 {
-	struct rte_eventdev *dev;
+	return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
+}
+
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+			      uint16_t nb_unlinks, uint8_t profile)
+{
 	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
-	int i, diag, j;
+	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	uint16_t *links_map;
+	int i, diag, j;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
 	dev = &rte_eventdevs[dev_id];
 
+	if (*dev->dev_ops->dev_infos_get == NULL)
+		return -ENOTSUP;
+
+	(*dev->dev_ops->dev_infos_get)(dev, &info);
+	if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+		RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+		return -EINVAL;
+	}
+
 	if (*dev->dev_ops->port_unlink == NULL) {
 		RTE_EDEV_LOG_ERR("Function not supported");
 		rte_errno = ENOTSUP;
 		return 0;
 	}
 
+	if (profile && *dev->dev_ops->port_unlink_profile == NULL) {
+		RTE_EDEV_LOG_ERR("Function not supported");
+		rte_errno = ENOTSUP;
+		return 0;
+	}
+
 	if (!is_valid_port(dev, port_id)) {
 		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 
-	links_map = dev->data->links_map;
+	links_map = dev->data->links_map[profile];
 	/* Point links_map to this port specific area */
 	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
 
@@ -1063,16 +1114,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 			return 0;
 		}
 
-	diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
-					queues, nb_unlinks);
-
+	if (profile)
+		diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues,
+							    nb_unlinks, profile);
+	else
+		diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues,
+						    nb_unlinks);
 	if (diag < 0)
 		return diag;
 
 	for (i = 0; i < diag; i++)
 		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
 
-	rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag);
+	rte_eventdev_trace_port_unlink_with_profile(dev_id, port_id, nb_unlinks, profile, diag);
 	return diag;
 }
 
@@ -1116,7 +1170,50 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 		return -EINVAL;
 	}
 
-	links_map = dev->data->links_map;
+	/* Use the default profile. */
+	links_map = dev->data->links_map[0];
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < dev->data->nb_queues; i++) {
+		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+			queues[count] = i;
+			priorities[count] = (uint8_t)links_map[i];
+			++count;
+		}
+	}
+
+	rte_eventdev_trace_port_links_get(dev_id, port_id, count);
+
+	return count;
+}
+
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+				 uint8_t priorities[], uint8_t profile)
+{
+	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
+	uint16_t *links_map;
+	int i, count = 0;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+	if (*dev->dev_ops->dev_infos_get == NULL)
+		return -ENOTSUP;
+
+	(*dev->dev_ops->dev_infos_get)(dev, &info);
+	if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) {
+		RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile);
+		return -EINVAL;
+	}
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	links_map = dev->data->links_map[profile];
 	/* Point links_map to this port specific area */
 	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
 	for (i = 0; i < dev->data->nb_queues; i++) {
@@ -1440,7 +1537,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
 {
 	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
 	const struct rte_memzone *mz;
-	int n;
+	int i, n;
 
 	/* Generate memzone name */
 	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
@@ -1460,11 +1557,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
 	*data = mz->addr;
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		memset(*data, 0, sizeof(struct rte_eventdev_data));
-		for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
-					RTE_EVENT_MAX_QUEUES_PER_DEV;
-		     n++)
-			(*data)->links_map[n] =
-				EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+		for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++)
+			for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV;
+			     n++)
+				(*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
 	}
 
 	return 0;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 2ba8a7b090..7a169de067 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -320,6 +320,12 @@ struct rte_event;
  * rte_event_queue_setup().
  */
 
+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/** Event device is capable of supporting multiple link profiles per event port
+ * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one.
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
 	 * device. These ports and queues are not accounted for in
 	 * max_event_ports or max_event_queues.
 	 */
+	uint8_t max_profiles_per_port;
+	/**< Maximum number of event queue profiles per event port.
+	 * A device that doesn't support multiple profiles will set this as 1.
+	 */
 };
 
 /**
@@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
  * latency of critical work by establishing the link with more event ports
  * at runtime.
  *
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile i.e. profile 0 of the event port.
+ *
  * @param dev_id
  *   The identifier of the device.
  *
@@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
  * Event queue(s) to event port unlink establishment can be changed at runtime
  * without re-configuring the device.
  *
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile i.e. profile 0 of the event port.
+ *
  * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
  *
  * @param dev_id
@@ -1626,6 +1644,136 @@ int
 rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 		      uint8_t queues[], uint16_t nb_unlinks);
 
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated profile identifier
+ * supplied in *profile* with service priorities supplied in *priorities* on
+ * the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, the links created by the call `rte_event_port_link`
+ * will be overwritten.
+ *
+ * Event ports by default use profile 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* from
+ * receiving events from the specified event queue(s) supplied in *queues*
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ *   Points to an array of *nb_links* event queues to be linked
+ *   to the event port.
+ *   NULL value is allowed, in which case this function links all the configured
+ *   event queues *nb_event_queues* which previously supplied to
+ *   rte_event_dev_configure() to the event port *port_id*
+ *
+ * @param priorities
+ *   Points to an array of *nb_links* service priorities associated with each
+ *   event queue link to event port.
+ *   The priority defines the event port's servicing priority for
+ *   event queue, which may be ignored by an implementation.
+ *   The requested priority should in the range of
+ *   [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ *   The implementation shall normalize the requested priority to
+ *   implementation supported priority value.
+ *   NULL value is allowed, in which case this function links the event queues
+ *   with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ *   The number of links to establish. This parameter is ignored if queues is
+ *   NULL.
+ *
+ * @param profile
+ *   The profile identifier associated with the links between event queues and
+ *   event port. Should be less than the max capability reported by
+ *   ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has the
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If return value is less than *nb_links* then implementation shall update the
+ * rte_errno accordingly, Possible rte_errno values are
+ * (EDQUOT) Quota exceeded(Application tried to link the queue configured with
+ *  RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+				 const uint8_t priorities[], uint16_t nb_links, uint8_t profile);
+
+/**
+ * Unlink multiple source event queues supplied in *queues* that belong to profile
+ * designated by *profile* from the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, i.e. the default profile, this function behaves the
+ * same as ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue *queue_id*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ *   Points to an array of *nb_unlinks* event queues to be unlinked
+ *   from the event port.
+ *   NULL value is allowed, in which case this function unlinks all the
+ *   event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ *   The number of unlinks to establish. This parameter is ignored if queues is
+ *   NULL.
+ *
+ * @param profile
+ *   The profile identifier associated with the links between event queues and
+ *   event port. Should be less than the max capability reported by
+ *   ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of unlinks successfully requested. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has the
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If return value is less than *nb_unlinks* then implementation shall update
+ * the rte_errno accordingly, Possible rte_errno values are
+ * (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+			      uint16_t nb_unlinks, uint8_t profile);
+
 /**
  * Returns the number of unlinks in progress.
  *
@@ -1680,6 +1828,42 @@ int
 rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);
 
+/**
+ * Retrieve the list of source event queues and its service priority
+ * associated to a profile and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier.
+ *
+ * @param[out] queues
+ *   Points to an array of *queues* for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ *   store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ *   Points to an array of *priorities* for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ *   store the service priority associated with each event queue linked
+ *
+ * @param profile
+ *   The profile identifier associated with the links between event queues and
+ *   event port. Should be less than the max capability reported by
+ *   ``rte_event_dev_info::max_profiles_per_port``
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ *  *port_id*.
+ * - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+				 uint8_t priorities[], uint8_t profile);
+
 /**
  * Retrieve the service ID of the event dev. If the adapter doesn't use
  * a rte_service function, this function returns -ESRCH.
@@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
 	return 0;
 }
 
+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the current active profile on an event port
+ * when multiple link profiles are configured on an event port through the
+ * function call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param profile
+ *   The identifier of the profile.
+ * @return
+ *  - 0 on success.
+ *  - -EINVAL if *dev_id*,  *port_id*, or *profile* is invalid.
+ */
+__rte_experimental
+static inline uint8_t
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile)
+{
+	const struct rte_event_fp_ops *fp_ops;
+	void *port;
+
+	fp_ops = &rte_event_fp_ops[dev_id];
+	port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+	    port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+		return -EINVAL;
+
+	if (port == NULL)
+		return -EINVAL;
+
+	if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+		return -EINVAL;
+#endif
+	rte_eventdev_trace_change_profile(dev_id, port_id, profile);
+
+	return fp_ops->profile_switch(port, profile);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..dfde8500fc 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+
 struct rte_event_fp_ops {
 	void **data;
 	/**< points to array of internal port data pointers */
@@ -65,6 +67,8 @@ struct rte_event_fp_ops {
 	/**< PMD Tx adapter enqueue same destination function. */
 	event_crypto_adapter_enqueue_t ca_enqueue;
 	/**< PMD Crypto adapter enqueue function. */
+	event_profile_switch_t profile_switch;
+	/**< PMD Event switch profile function. */
 	uintptr_t reserved[6];
 } __rte_cache_aligned;
 
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..141a45ed58 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
 	rte_trace_point_emit_int(op);
 )
 
+RTE_TRACE_POINT_FP(
+	rte_eventdev_trace_change_profile,
+	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+	rte_trace_point_emit_u8(dev_id);
+	rte_trace_point_emit_u8(port_id);
+	rte_trace_point_emit_u8(profile);
+)
+
 RTE_TRACE_POINT_FP(
 	rte_eventdev_trace_eth_tx_adapter_enqueue,
 	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b03c10d99f..67efee5489 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,12 @@ EXPERIMENTAL {
 	rte_event_eth_tx_adapter_runtime_params_init;
 	rte_event_eth_tx_adapter_runtime_params_set;
 	rte_event_timer_remaining_ticks_get;
+
+	# added in 23.11
+	rte_event_port_profile_links_set;
+	rte_event_port_profile_unlink;
+	rte_event_port_profile_links_get;
+	rte_event_port_profile_switch;
 };
 
 INTERNAL {
-- 
2.25.1


Thread overview: 44+ messages
2023-08-09 14:26 [RFC 0/3] Introduce event " pbhagavatula
2023-08-09 14:26 ` [RFC 1/3] eventdev: introduce " pbhagavatula
2023-08-18 10:27   ` Jerin Jacob
2023-08-09 14:26 ` [RFC 2/3] event/cnxk: implement event " pbhagavatula
2023-08-09 14:26 ` [RFC 3/3] test/event: add event link profile test pbhagavatula
2023-08-09 19:45 ` [RFC 0/3] Introduce event link profiles Mattias Rönnblom
2023-08-10  5:17   ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-08-12  5:52     ` Mattias Rönnblom
2023-08-14 11:29       ` Pavan Nikhilesh Bhagavatula
2023-08-25 18:44 ` [PATCH " pbhagavatula
2023-08-25 18:44   ` pbhagavatula [this message]
2023-08-25 18:44   ` [PATCH 2/3] event/cnxk: implement " pbhagavatula
2023-08-25 18:44   ` [PATCH 3/3] test/event: add event link profile test pbhagavatula
2023-08-31 20:44   ` [PATCH v2 0/3] Introduce event link profiles pbhagavatula
2023-08-31 20:44     ` [PATCH v2 1/3] eventdev: introduce " pbhagavatula
2023-09-20  4:22       ` Jerin Jacob
2023-08-31 20:44     ` [PATCH v2 2/3] event/cnxk: implement event " pbhagavatula
2023-08-31 20:44     ` [PATCH v2 3/3] test/event: add event link profile test pbhagavatula
2023-09-21 10:28     ` [PATCH v3 0/3] Introduce event link profiles pbhagavatula
2023-09-21 10:28       ` [PATCH v3 1/3] eventdev: introduce " pbhagavatula
2023-09-27 15:23         ` Jerin Jacob
2023-09-21 10:28       ` [PATCH v3 2/3] event/cnxk: implement event " pbhagavatula
2023-09-27 15:29         ` Jerin Jacob
2023-09-21 10:28       ` [PATCH v3 3/3] test/event: add event link profile test pbhagavatula
2023-09-27 14:56       ` [PATCH v3 0/3] Introduce event link profiles Jerin Jacob
2023-09-28 10:12       ` [PATCH v4 " pbhagavatula
2023-09-28 10:12         ` [PATCH v4 1/3] eventdev: introduce " pbhagavatula
2023-10-03  6:55           ` Jerin Jacob
2023-09-28 10:12         ` [PATCH v4 2/3] event/cnxk: implement event " pbhagavatula
2023-09-28 10:12         ` [PATCH v4 3/3] test/event: add event link profile test pbhagavatula
2023-09-28 14:45         ` [PATCH v4 0/3] Introduce event link profiles Jerin Jacob
2023-09-29  9:27           ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-10-03  7:51         ` [PATCH v5 " pbhagavatula
2023-10-03  7:51           ` [PATCH v5 1/3] eventdev: introduce " pbhagavatula
2023-10-03  7:51           ` [PATCH v5 2/3] event/cnxk: implement event " pbhagavatula
2023-10-03  7:51           ` [PATCH v5 3/3] test/event: add event link profile test pbhagavatula
2023-10-03  9:47           ` [PATCH v6 0/3] Introduce event link profiles pbhagavatula
2023-10-03  9:47             ` [PATCH v6 1/3] eventdev: introduce " pbhagavatula
2023-10-03  9:47             ` [PATCH v6 2/3] event/cnxk: implement event " pbhagavatula
2023-10-03  9:47             ` [PATCH v6 3/3] test/event: add event link profile test pbhagavatula
2023-10-03 10:36             ` [PATCH v6 0/3] Introduce event link profiles Jerin Jacob
2023-10-03 14:12               ` Bruce Richardson
2023-10-03 15:17                 ` Jerin Jacob
2023-10-03 15:32                   ` [EXT] " Pavan Nikhilesh Bhagavatula
