DPDK patches and discussions
* [PATCH 0/6] Extend and set event queue attributes at runtime
@ 2022-03-29 13:10 Shijith Thotton
  2022-03-29 13:11 ` [PATCH 1/6] eventdev: support to set " Shijith Thotton
                   ` (7 more replies)
  0 siblings, 8 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:10 UTC (permalink / raw)
  To: dev, jerinj; +Cc: Shijith Thotton, pbhagavatula

This series adds support for setting event queue attributes at runtime
and introduces two new event queue attributes, weight and affinity. The
eventdev capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to
expose support for setting attributes at runtime, and the
rte_event_queue_attr_set() API is used to set them.

Attributes weight and affinity are not yet added to the
rte_event_queue_conf structure to avoid an ABI break; they will be added
in 22.11. Until then, PMDs using the new attributes are expected to
manage them.

Test application changes and example implementation are added as last
three patches.
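
For context, the intended application-level usage could look like the
following sketch (the function name, dev_id/queue_id and the chosen
weight are illustrative; the attribute ID and capability flag are the
ones introduced by patches 1 and 2):

```c
#include <errno.h>
#include <rte_eventdev.h>

/* Sketch: change a queue's scheduling weight after the device has been
 * started. Error handling is minimal for brevity. */
static int
set_queue_weight_runtime(uint8_t dev_id, uint8_t queue_id, uint32_t weight)
{
	struct rte_event_dev_info info;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	/* Runtime changes are only valid if the PMD exposes the capability. */
	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
		return -ENOTSUP;

	return rte_event_queue_attr_set(dev_id, queue_id,
					RTE_EVENT_QUEUE_ATTR_WEIGHT, weight);
}
```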

Pavan Nikhilesh (1):
  common/cnxk: use lock when accessing mbox of SSO

Shijith Thotton (5):
  eventdev: support to set queue attributes at runtime
  eventdev: add weight and affinity to queue attributes
  doc: announce change in event queue conf structure
  test/event: test cases to test runtime queue attribute
  event/cnxk: support to set runtime queue attributes

 app/test/test_eventdev.c                  | 146 ++++++++++++++++++
 doc/guides/eventdevs/features/cnxk.ini    |   1 +
 doc/guides/eventdevs/features/default.ini |   1 +
 doc/guides/rel_notes/deprecation.rst      |   3 +
 drivers/common/cnxk/roc_sso.c             | 174 ++++++++++++++++------
 drivers/common/cnxk/roc_sso_priv.h        |   1 +
 drivers/common/cnxk/roc_tim.c             | 134 +++++++++++------
 drivers/event/cnxk/cn10k_eventdev.c       |   4 +
 drivers/event/cnxk/cn9k_eventdev.c        |   4 +
 drivers/event/cnxk/cnxk_eventdev.c        |  81 +++++++++-
 drivers/event/cnxk/cnxk_eventdev.h        |  16 ++
 lib/eventdev/eventdev_pmd.h               |  44 ++++++
 lib/eventdev/rte_eventdev.c               |  43 ++++++
 lib/eventdev/rte_eventdev.h               |  75 +++++++++-
 lib/eventdev/version.map                  |   3 +
 15 files changed, 627 insertions(+), 103 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH 1/6] eventdev: support to set queue attributes at runtime
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
@ 2022-03-29 13:11 ` Shijith Thotton
  2022-03-30 10:58   ` Van Haaren, Harry
  2022-03-30 12:14   ` Mattias Rönnblom
  2022-03-29 13:11 ` [PATCH 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:11 UTC (permalink / raw)
  To: dev, jerinj; +Cc: Shijith Thotton, pbhagavatula, Ray Kinsella

Added a new eventdev API rte_event_queue_attr_set() to set event queue
attributes at runtime, overriding the values set during initialization
using rte_event_queue_setup(). PMDs supporting this feature should
expose the capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/eventdevs/features/default.ini |  1 +
 lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++
 lib/eventdev/rte_eventdev.c               | 31 ++++++++++++++++++
 lib/eventdev/rte_eventdev.h               | 38 ++++++++++++++++++++++-
 lib/eventdev/version.map                  |  3 ++
 5 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 2ea233463a..00360f60c6 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -17,6 +17,7 @@ runtime_port_link          =
 multiple_queue_port        =
 carry_flow_id              =
 maintenance_free           =
+runtime_queue_attr         =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..6182749503 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Set an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t attr_value);
+
 /**
  * Retrieve the default event port configuration.
  *
@@ -1211,6 +1231,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_set_t queue_attr_set;
+	/**< Set an event queue attribute. */
 
 	eventdev_port_default_conf_get_t port_def_conf;
 	/**< Get default port configuration. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..13c8af877e 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -844,6 +844,37 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint32_t attr_value)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	if (attr_id > RTE_EVENT_QUEUE_ATTR_MAX) {
+		RTE_EDEV_LOG_ERR("Invalid attribute ID %" PRIu32, attr_id);
+		return -EINVAL;
+	}
+
+	if (!(dev->data->event_dev_cap &
+	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
+		RTE_EDEV_LOG_ERR(
+			"Device %" PRIu8 " does not support changing queue attributes at runtime",
+			dev_id);
+		return -ENOTSUP;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
+	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
+					       attr_value);
+}
+
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..19710cd0c5 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -225,7 +225,7 @@ struct rte_event;
 /**< Event scheduling prioritization is based on the priority associated with
  *  each event queue.
  *
- *  @see rte_event_queue_setup()
+ *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
@@ -307,6 +307,13 @@ struct rte_event;
  * global pool, or process signaling related to load balancing.
  */
 
+#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
+/**< Event device is capable of changing the queue attributes at runtime, i.e.
+ * after the rte_event_queue_setup() or rte_event_dev_start() call sequence. If
+ * this flag is not set, eventdev queue attributes can only be configured during
+ * rte_event_queue_setup().
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -678,6 +685,11 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
 
+/**
+ * Maximum supported attribute ID.
+ */
+#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
+
 /**
  * Get an attribute from a queue.
  *
@@ -702,6 +714,30 @@ int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value);
 
+/**
+ * Set an event queue attribute.
+ *
+ * @param dev_id
+ *   Eventdev id
+ * @param queue_id
+ *   Eventdev queue id
+ * @param attr_id
+ *   The attribute ID to set
+ * @param attr_value
+ *   The attribute value to set
+ *
+ * @return
+ *   - 0: Successfully set attribute.
+ *   - -EINVAL: invalid device, queue or attr_id.
+ *   - -ENOTSUP: device does not support setting event attribute.
+ *   - -EBUSY: device is in running state
+ *   - <0: failed to set event queue attribute
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint32_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {
-- 
2.25.1



* [PATCH 2/6] eventdev: add weight and affinity to queue attributes
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
  2022-03-29 13:11 ` [PATCH 1/6] eventdev: support to set " Shijith Thotton
@ 2022-03-29 13:11 ` Shijith Thotton
  2022-03-30 12:12   ` Mattias Rönnblom
  2022-03-29 13:11 ` [PATCH 3/6] doc: announce change in event queue conf structure Shijith Thotton
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:11 UTC (permalink / raw)
  To: dev, jerinj; +Cc: Shijith Thotton, pbhagavatula

Extended eventdev queue QoS attributes to support weight and affinity.
If queues are of the same priority, events from the queue with the
highest weight will be scheduled first. Affinity indicates the number of
times subsequent schedule calls from an event port will use the same
event queue. The schedule call selects another queue if the current
queue goes empty or the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure; PMDs are relied upon to manage
them. The new eventdev op queue_attr_get can be used to get them from
the PMD.
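
The queue-selection policy described above can be sketched as a small
self-contained simulation (this models only the documented policy, not
the PMD's actual scheduler; all names are illustrative):

```c
#include <stdint.h>

#define SIM_NUM_QUEUES 2

struct sim_queue {
	uint8_t priority; /* lower value = higher priority */
	uint8_t weight;   /* among equal priorities, higher weight wins */
	uint8_t affinity; /* consecutive schedules that stay on this queue */
	int nb_events;    /* events currently in the queue */
};

/* Pick the next queue for a port: stay on 'cur' while it is non-empty and
 * its affinity budget lasts, otherwise re-select by priority, then weight. */
static int
sim_pick_queue(struct sim_queue *q, int cur, int *stay_cnt)
{
	int i, best = -1;

	if (cur >= 0 && q[cur].nb_events > 0 && *stay_cnt < q[cur].affinity) {
		(*stay_cnt)++;
		return cur;
	}

	for (i = 0; i < SIM_NUM_QUEUES; i++) {
		if (q[i].nb_events == 0)
			continue;
		if (best < 0 || q[i].priority < q[best].priority ||
		    (q[i].priority == q[best].priority &&
		     q[i].weight > q[best].weight))
			best = i;
	}
	*stay_cnt = 1;
	return best;
}
```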

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 lib/eventdev/eventdev_pmd.h | 22 ++++++++++++++++++++
 lib/eventdev/rte_eventdev.c | 12 +++++++++++
 lib/eventdev/rte_eventdev.h | 41 +++++++++++++++++++++++++++++++++----
 3 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 6182749503..f19df98a7a 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute. */
 
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 13c8af877e..37f0e54bf3 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 		*attr_value = conf->schedule_type;
 		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 19710cd0c5..fa16fc5dcb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -222,8 +222,14 @@ struct rte_event;
 
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
-/**< Event scheduling prioritization is based on the priority associated with
- *  each event queue.
+/**< Event scheduling prioritization is based on the priority and weight
+ * associated with each event queue. Events from the queue with the highest
+ * priority are scheduled first. If the queues are of the same priority, the
+ * queue with the highest weight is selected. Subsequent schedules from an
+ * event port could see events from the same event queue if the queue is
+ * configured with an affinity count. The affinity count of a queue indicates
+ * the number of times subsequent schedule calls from an event port should use
+ * the same queue if the queue is non-empty.
  *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
@@ -331,6 +337,26 @@ struct rte_event;
  * @see rte_event_port_link()
  */
 
+/* Event queue scheduling weights */
+#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST   255
+/**< Highest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_WEIGHT_LOWEST    0
+/**< Lowest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
+/* Event queue scheduling affinity */
+#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST   255
+/**< Highest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_AFFINITY_LOWEST    0
+/**< Lowest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
 /**
  * Get the total number of event devices that have been successfully
  * initialised.
@@ -684,11 +710,18 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  * The schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
-
+/**
+ * The weight of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
+/**
+ * Affinity of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 /**
  * Maximum supported attribute ID.
  */
-#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
+#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_AFFINITY
 
 /**
  * Get an attribute from a queue.
-- 
2.25.1



* [PATCH 3/6] doc: announce change in event queue conf structure
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
  2022-03-29 13:11 ` [PATCH 1/6] eventdev: support to set " Shijith Thotton
  2022-03-29 13:11 ` [PATCH 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
@ 2022-03-29 13:11 ` Shijith Thotton
  2022-03-29 13:11 ` [PATCH 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:11 UTC (permalink / raw)
  To: dev, jerinj; +Cc: Shijith Thotton, pbhagavatula, Ray Kinsella

The rte_event_queue_conf structure will be extended to include fields
to support the weight and affinity attributes. Once they are added in
DPDK 22.11, the eventdev internal op queue_attr_get can be removed.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c53d..04125db681 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,3 +125,6 @@ Deprecation Notices
   applications should be updated to use the ``dmadev`` library instead,
   with the underlying HW-functionality being provided by the ``ioat`` or
   ``idxd`` dma drivers
+
+* eventdev: New fields to represent event queue weight and affinity will be
+  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
-- 
2.25.1



* [PATCH 4/6] test/event: test cases to test runtime queue attribute
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
                   ` (2 preceding siblings ...)
  2022-03-29 13:11 ` [PATCH 3/6] doc: announce change in event queue conf structure Shijith Thotton
@ 2022-03-29 13:11 ` Shijith Thotton
  2022-03-29 13:11 ` [PATCH 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:11 UTC (permalink / raw)
  To: dev, jerinj; +Cc: Shijith Thotton, pbhagavatula

Added test cases to verify that the queue QoS attributes priority,
weight and affinity can be changed at runtime.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 app/test/test_eventdev.c | 146 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 4f51042bda..b9ec319ad9 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -385,6 +385,146 @@ test_eventdev_queue_attr_priority(void)
 	return TEST_SUCCESS;
 }
 
+static int
+test_eventdev_queue_attr_priority_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t attr_val, tmp;
+
+		attr_val = i % RTE_EVENT_DEV_PRIORITY_LOWEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_PRIORITY,
+						 attr_val),
+			"Queue priority set failed");
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_get(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_PRIORITY,
+						 &tmp),
+			"Queue priority get failed");
+		TEST_ASSERT_EQUAL(tmp, attr_val,
+				  "Wrong priority value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_weight_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t attr_val, tmp;
+
+		attr_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_WEIGHT,
+						 attr_val),
+			"Queue weight set failed");
+		TEST_ASSERT_SUCCESS(rte_event_queue_attr_get(
+					    TEST_DEV_ID, i,
+					    RTE_EVENT_QUEUE_ATTR_WEIGHT, &tmp),
+				    "Queue weight get failed");
+		TEST_ASSERT_EQUAL(tmp, attr_val,
+				  "Wrong weight value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_affinity_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t attr_val, tmp;
+
+		attr_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_AFFINITY,
+						 attr_val),
+			"Queue affinity set failed");
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_get(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_AFFINITY,
+						 &tmp),
+			"Queue affinity get failed");
+		TEST_ASSERT_EQUAL(tmp, attr_val,
+				  "Wrong affinity value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_queue_attr_nb_atomic_flows(void)
 {
@@ -964,6 +1104,12 @@ static struct unit_test_suite eventdev_common_testsuite  = {
 			test_eventdev_queue_count),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_priority),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_priority_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_weight_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_affinity_runtime),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_nb_atomic_flows),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
-- 
2.25.1



* [PATCH 5/6] event/cnxk: support to set runtime queue attributes
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
                   ` (3 preceding siblings ...)
  2022-03-29 13:11 ` [PATCH 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
@ 2022-03-29 13:11 ` Shijith Thotton
  2022-03-30 11:05   ` Van Haaren, Harry
  2022-03-29 13:11 ` [PATCH 6/6] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:11 UTC (permalink / raw)
  To: dev, jerinj; +Cc: Shijith Thotton, pbhagavatula

Added an API to set queue attributes at runtime and an API to get the
weight and affinity attributes.
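
The driver normalizes the generic 0-255 attribute ranges onto the SSO
hardware's discrete level counts. A minimal sketch of that arithmetic
(same as the CNXK_QOS_NORMALIZE macro added in this patch; the helper
name is illustrative):

```c
#include <stdint.h>

/* Map val in [0, max] onto cnt discrete hardware levels, as
 * CNXK_QOS_NORMALIZE does in cnxk_eventdev.h. */
static inline uint8_t
qos_normalize(uint32_t val, uint32_t max, uint32_t cnt)
{
	return val / ((max + cnt - 1) / cnt);
}
```

With the counts defined in this patch (8 priorities, 64 weights, 16
affinities), a weight of 255 maps to hardware level 63, a priority of
100 to level 3 (100 / 32), and an affinity of 255 to level 15.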

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 81 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 16 +++++
 5 files changed, 100 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = Y
 
 [Eth Rx adapter Features]
 internal_port              = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..f6973bb691 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -845,9 +845,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..7cb59bbbfa 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1079,9 +1079,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..73f1029779 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 				  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 				  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+				  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }
 
 int
@@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 		     const struct rte_event_queue_conf *queue_conf)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
-	/* Normalize <0-255> to <0-7> */
-	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
-					  queue_conf->priority / 32);
+	uint8_t priority, weight, affinity;
+
+	/* Default weight and affinity */
+	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+
+	priority = CNXK_QOS_NORMALIZE(queue_conf->priority,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST,
+				    CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
+		    priority, weight, affinity);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
 }
 
 void
@@ -314,6 +331,58 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+int
+cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t *attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+	*attr_value = attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT ?
+			      dev->mlt_prio[queue_id].weight :
+			      dev->mlt_prio[queue_id].affinity;
+
+	return 0;
+}
+
+int
+cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t priority, weight, affinity;
+	struct rte_event_queue_conf *conf;
+
+	conf = &event_dev->data->queues_cfg[queue_id];
+
+	switch (attr_id) {
+	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
+		conf->priority = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		dev->mlt_prio[queue_id].weight = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		dev->mlt_prio[queue_id].affinity = attr_value;
+		break;
+	default:
+		plt_sso_dbg("Ignored setting attribute id %u", attr_id);
+		return 0;
+	}
+
+	priority = CNXK_QOS_NORMALIZE(conf->priority,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST,
+				    CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 		       struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 5564746e6d..8037cbbb3b 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,6 +38,9 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
+#define CNXK_SSO_PRIORITY_CNT  (8)
+#define CNXK_SSO_WEIGHT_CNT    (64)
+#define CNXK_SSO_AFFINITY_CNT  (16)
 
 #define CNXK_TT_FROM_TAG(x)	    (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x)	    (((x) >> 38) & SSO_TT_EMPTY)
@@ -54,6 +57,7 @@
 #define CN10K_GW_MODE_PREF     1
 #define CN10K_GW_MODE_PREF_WFE 2
 
+#define CNXK_QOS_NORMALIZE(val, max, cnt) ((val) / (((max) + (cnt) - 1) / (cnt)))
 #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name)                               \
 	do {                                                                   \
 		if (strncmp(dev->driver->name, drv_name, strlen(drv_name)))    \
@@ -79,6 +83,11 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };
 
+struct cnxk_sso_mlt_prio {
+	uint8_t weight;
+	uint8_t affinity;
+};
+
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -108,6 +117,7 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
+	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -234,6 +244,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
+int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t *attr_value);
+int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t attr_value);
 void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 			    struct rte_event_port_conf *port_conf);
 int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH 6/6] common/cnxk: use lock when accessing mbox of SSO
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
                   ` (4 preceding siblings ...)
  2022-03-29 13:11 ` [PATCH 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
@ 2022-03-29 13:11 ` Shijith Thotton
  2022-03-29 18:49 ` [PATCH 0/6] Extend and set event queue attributes at runtime Jerin Jacob
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
  7 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-03-29 13:11 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Since mbox is now accessed from multiple threads, use lock to
synchronize access.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 drivers/common/cnxk/roc_sso.c      | 174 +++++++++++++++++++++--------
 drivers/common/cnxk/roc_sso_priv.h |   1 +
 drivers/common/cnxk/roc_tim.c      | 134 ++++++++++++++--------
 3 files changed, 215 insertions(+), 94 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index f8a0a96533..358d37a9f2 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	}
 
 	rc = mbox_process_msg(dev->mbox, rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 	}
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type,
 	}
 
 	req->modify = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type)
 	}
 
 	req->partial = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Failed to get free resource count\n");
-		return rc;
+		return -EIO;
 	}
 
 	roc_sso->max_hwgrp = rsrc_cnt->sso;
@@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp)
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_hws; i++)
 		sso->hws_msix_offset[i] = rsp->ssow_msixoff[i];
@@ -285,53 +285,71 @@ int
 roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
 		      struct roc_sso_hws_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_hws_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_hws_stats *)
 			mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->hws = hws;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->arbitration = req_rsp->arbitration;
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 			struct roc_sso_hwgrp_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_grp_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_grp_stats *)
 			mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->grp = hwgrp;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->aw_status = req_rsp->aw_status;
 	stats->dq_pc = req_rsp->dq_pc;
@@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 	stats->ts_pc = req_rsp->ts_pc;
 	stats->wa_pc = req_rsp->wa_pc;
 	stats->ws_pc = req_rsp->ws_pc;
-	return 0;
+
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -358,10 +379,12 @@ int
 roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			 uint8_t nb_qos, uint32_t nb_xaq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_qos_cfg *req;
 	int i, rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	for (i = 0; i < nb_qos; i++) {
 		uint8_t xaq_prcnt = qos[i].xaq_prcnt;
 		uint8_t iaq_prcnt = qos[i].iaq_prcnt;
@@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 		req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
 		if (req == NULL) {
 			rc = mbox_process(dev->mbox);
-			if (rc < 0)
-				return rc;
+			if (rc) {
+				rc = -EIO;
+				goto fail;
+			}
+
 			req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-			if (req == NULL)
-				return -ENOSPC;
+			if (req == NULL) {
+				rc = -ENOSPC;
+				goto fail;
+			}
 		}
 		req->grp = qos[i].hwgrp;
 		req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100;
@@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			       100;
 	}
 
-	return mbox_process(dev->mbox);
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		rc = -EIO;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
-				       roc_sso->xae_waes, roc_sso->xaq_buf_size,
-				       roc_sso->nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
+				     roc_sso->xae_waes, roc_sso->xaq_buf_size,
+				     roc_sso->nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 	req->npa_aura_id = npa_aura_id;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 			uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
 		return -EINVAL;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_priority *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->grp = hwgrp;
 	req->weight = weight;
 	req->affinity = affinity;
 	req->priority = priority;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
+	plt_spinlock_unlock(&sso->mbox_lock);
 	plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight,
 		    affinity, priority);
 
 	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	if (roc_sso->max_hws < nb_hws)
 		return -ENOENT;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
 	if (rc < 0) {
 		plt_err("Unable to attach SSO HWS LFs");
-		return rc;
+		goto fail;
 	}
 
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
@@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto sso_msix_fail;
 	}
 
+	plt_spinlock_unlock(&sso->mbox_lock);
 	roc_sso->nb_hwgrp = nb_hwgrp;
 	roc_sso->nb_hws = nb_hws;
 
@@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	roc_sso->nb_hwgrp = 0;
 	roc_sso->nb_hws = 0;
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
 
 int
@@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso = roc_sso_to_sso_priv(roc_sso);
 	memset(sso, 0, sizeof(*sso));
 	pci_dev = roc_sso->pci_dev;
+	plt_spinlock_init(&sso->mbox_lock);
 
 	rc = dev_init(&sso->dev, pci_dev);
 	if (rc < 0) {
@@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 		goto fail;
 	}
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_get(roc_sso);
 	if (rc < 0) {
 		plt_err("Failed to get SSO resources");
@@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso->pci_dev = pci_dev;
 	sso->dev.drv_inited = true;
 	roc_sso->lmt_base = sso->dev.lmt_base;
+	plt_spinlock_unlock(&sso->mbox_lock);
 
 	return 0;
 link_mem_free:
@@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 rsrc_fail:
 	rc |= dev_fini(&sso->dev, pci_dev);
 fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..674e4e0a39 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -22,6 +22,7 @@ struct sso {
 	/* SSO link mapping. */
 	struct plt_bitmap **link_map;
 	void *link_map_mem;
+	plt_spinlock_t mbox_lock;
 } __plt_cache_aligned;
 
 enum sso_err_status {
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index cefd9bc89d..0f9209937b 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -8,15 +8,16 @@
 static int
 tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct tim *tim = roc_tim_to_tim_priv(roc_tim);
+	struct dev *dev = &sso->dev;
 	struct msix_offset_rsp *rsp;
 	int i, rc;
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_ring; i++)
 		tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i];
@@ -88,20 +89,23 @@ int
 roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 		  uint32_t *cur_bkt)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_enable_rsp *rsp;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_enable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (cur_bkt)
@@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 	if (start_tsc)
 		*start_tsc = rsp->timestarted;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_disable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 uintptr_t
@@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 		  uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz,
 		  uint32_t interval, uint64_t intervalns, uint64_t clockfreq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_config_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_config_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 	req->bigendian = false;
 	req->bucketsize = bucket_sz;
@@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 	req->gpioedge = TIM_GPIO_LTOH_TRANS;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src,
 		    uint64_t clockfreq, uint64_t *intervalns,
 		    uint64_t *interval)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_intvl_req *req;
 	struct tim_intvl_rsp *rsp;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 
 	req->clockfreq = clockfreq;
 	req->clocksource = clk_src;
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	*intervalns = rsp->intvl_ns;
 	*interval = rsp->intvl_cyc;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	struct dev *dev = &sso->dev;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_alloc(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->npa_pf_func = idev_npa_pffunc_get();
 	req->sso_pf_func = idev_sso_pffunc_get();
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (clk)
@@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	if (rc < 0) {
 		plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
 		free_req = mbox_alloc_msg_tim_lf_free(dev->mbox);
-		if (free_req == NULL)
-			return -ENOSPC;
+		if (free_req == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 		free_req->ring = ring_id;
-		mbox_process(dev->mbox);
+		rc = mbox_process(dev->mbox);
+		if (rc)
+			rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
 	tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
 				tim->tim_msix_offsets[ring_id]);
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_free(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
 	if (rc < 0) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return 0;
 }
 
@@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim)
 	struct rsrc_attach_req *attach_req;
 	struct rsrc_detach_req *detach_req;
 	struct free_rsrcs_rsp *free_rsrc;
-	struct dev *dev;
+	struct sso *sso;
 	uint16_t nb_lfs;
+	struct dev *dev;
 	int rc;
 
 	if (roc_tim == NULL || roc_tim->roc_sso == NULL)
 		return TIM_ERR_PARAM;
 
+	sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	dev = &sso->dev;
 	PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ);
-	dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
 	nb_lfs = roc_tim->nb_lfs;
+	plt_spinlock_lock(&sso->mbox_lock);
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to get free rsrc count.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	if (nb_lfs && (free_rsrc->tim < nb_lfs)) {
 		plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs,
 			    free_rsrc->tim);
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	attach_req = mbox_alloc_msg_attach_resources(dev->mbox);
-	if (attach_req == NULL)
-		return -ENOSPC;
+	if (attach_req == NULL) {
+		nb_lfs = 0;
+		goto fail;
+	}
 	attach_req->modify = true;
 	attach_req->timlfs = nb_lfs ? nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@ roc_tim_init(struct roc_tim *roc_tim)
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
-- 
2.25.1



* Re: [PATCH 0/6] Extend and set event queue attributes at runtime
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
                   ` (5 preceding siblings ...)
  2022-03-29 13:11 ` [PATCH 6/6] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
@ 2022-03-29 18:49 ` Jerin Jacob
  2022-03-30 10:52   ` Van Haaren, Harry
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
  7 siblings, 1 reply; 58+ messages in thread
From: Jerin Jacob @ 2022-03-29 18:49 UTC (permalink / raw)
  To: Shijith Thotton, Van Haaren, Harry, Jayatheerthan, Jay,
	Erik Gabriel Carrillo, Gujjar, Abhinandan S, McDaniel, Timothy,
	Hemant Agrawal, Nipun Gupta, Mattias Rönnblom, Ray Kinsella
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Liang Ma

On Tue, Mar 29, 2022 at 6:42 PM Shijith Thotton <sthotton@marvell.com> wrote:
>
> This series adds support for setting event queue attributes at runtime
> and adds two new event queue attributes weight and affinity. Eventdev
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to expose the
> capability to set attributes at runtime and rte_event_queue_attr_set()
> API is used to set the attributes.
>
> Attributes weight and affinity are not yet added to rte_event_queue_conf
> structure to avoid ABI break and will be added in 22.11. Till then, PMDs
> using the new attributes are expected to manage them.
>
> Test application changes and example implementation are added as last
> three patches.


+ @Van Haaren, Harry  @Jayatheerthan, Jay  @Erik Gabriel Carrillo
@Gujjar, Abhinandan S  @McDaniel, Timothy  @Hemant Agrawal  @Nipun
Gupta  @Mattias Rönnblom  @lingma @Ray Kinsella

> Pavan Nikhilesh (1):
>   common/cnxk: use lock when accessing mbox of SSO
>
> Shijith Thotton (5):
>   eventdev: support to set queue attributes at runtime
>   eventdev: add weight and affinity to queue attributes
>   doc: announce change in event queue conf structure
>   test/event: test cases to test runtime queue attribute
>   event/cnxk: support to set runtime queue attributes
>
>  app/test/test_eventdev.c                  | 146 ++++++++++++++++++
>  doc/guides/eventdevs/features/cnxk.ini    |   1 +
>  doc/guides/eventdevs/features/default.ini |   1 +
>  doc/guides/rel_notes/deprecation.rst      |   3 +
>  drivers/common/cnxk/roc_sso.c             | 174 ++++++++++++++++------
>  drivers/common/cnxk/roc_sso_priv.h        |   1 +
>  drivers/common/cnxk/roc_tim.c             | 134 +++++++++++------
>  drivers/event/cnxk/cn10k_eventdev.c       |   4 +
>  drivers/event/cnxk/cn9k_eventdev.c        |   4 +
>  drivers/event/cnxk/cnxk_eventdev.c        |  81 +++++++++-
>  drivers/event/cnxk/cnxk_eventdev.h        |  16 ++
>  lib/eventdev/eventdev_pmd.h               |  44 ++++++
>  lib/eventdev/rte_eventdev.c               |  43 ++++++
>  lib/eventdev/rte_eventdev.h               |  75 +++++++++-
>  lib/eventdev/version.map                  |   3 +
>  15 files changed, 627 insertions(+), 103 deletions(-)
>
> --
> 2.25.1
>


* RE: [PATCH 0/6] Extend and set event queue attributes at runtime
  2022-03-29 18:49 ` [PATCH 0/6] Extend and set event queue attributes at runtime Jerin Jacob
@ 2022-03-30 10:52   ` Van Haaren, Harry
  2022-04-04  7:57     ` Shijith Thotton
  0 siblings, 1 reply; 58+ messages in thread
From: Van Haaren, Harry @ 2022-03-30 10:52 UTC (permalink / raw)
  To: Jerin Jacob, Shijith Thotton, Jayatheerthan, Jay, Carrillo,
	Erik G, Gujjar, Abhinandan S, McDaniel, Timothy, Hemant Agrawal,
	Nipun Gupta, mattias.ronnblom, Ray Kinsella
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Liang Ma

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, March 29, 2022 7:50 PM
> To: Shijith Thotton <sthotton@marvell.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>; Carrillo, Erik G <erik.g.carrillo@intel.com>;
> Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; McDaniel, Timothy
> <timothy.mcdaniel@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> Nipun Gupta <nipun.gupta@nxp.com>; mattias.ronnblom
> <mattias.ronnblom@ericsson.com>; Ray Kinsella <mdr@ashroe.eu>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinj@marvell.com>; Pavan
> Nikhilesh <pbhagavatula@marvell.com>; Liang Ma <liangma@liangbit.com>
> Subject: Re: [PATCH 0/6] Extend and set event queue attributes at runtime
> 
> On Tue, Mar 29, 2022 at 6:42 PM Shijith Thotton <sthotton@marvell.com> wrote:
> >
> > This series adds support for setting event queue attributes at runtime
> > and adds two new event queue attributes weight and affinity. Eventdev
> > capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to expose
> the
> > capability to set attributes at runtime and rte_event_queue_attr_set()
> > API is used to set the attributes.
> >
> > Attributes weight and affinity are not yet added to rte_event_queue_conf
> > structure to avoid ABI break and will be added in 22.11. Till then, PMDs
> > using the new attributes are expected to manage them.

When the new attributes are added to the queue_conf structure in 22.11, will the attr_get() function have any real use?

If the attr_get() function is not useful post 22.11 (i.e., it returns constant integers?), we should consider whether waiting
for the ABI break in 22.11 is a better solution, as it avoids adding public API/ABI functions that have only limited-time value?

<snip>
 
> + @Van Haaren, Harry  @Jayatheerthan, Jay  @Erik Gabriel Carrillo
> @Gujjar, Abhinandan S  @McDaniel, Timothy  @Hemant Agrawal  @Nipun
> Gupta  @Mattias Rönnblom  @lingma @Ray Kinsella

Thanks for flagging, Jerin; indeed I hadn't looked at this patchset yet.

From the event/sw point of view, the new runtime queue attribute capability is not
available, so the feature flag will not be set.

<snip>

Some code comments inline on the impl patches comping up. Regards, -Harry


* RE: [PATCH 1/6] eventdev: support to set queue attributes at runtime
  2022-03-29 13:11 ` [PATCH 1/6] eventdev: support to set " Shijith Thotton
@ 2022-03-30 10:58   ` Van Haaren, Harry
  2022-04-04  9:35     ` Shijith Thotton
  2022-03-30 12:14   ` Mattias Rönnblom
  1 sibling, 1 reply; 58+ messages in thread
From: Van Haaren, Harry @ 2022-03-30 10:58 UTC (permalink / raw)
  To: Shijith Thotton, dev, jerinj; +Cc: pbhagavatula, Ray Kinsella

> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Tuesday, March 29, 2022 2:11 PM
> To: dev@dpdk.org; jerinj@marvell.com
> Cc: Shijith Thotton <sthotton@marvell.com>; pbhagavatula@marvell.com; Ray
> Kinsella <mdr@ashroe.eu>
> Subject: [PATCH 1/6] eventdev: support to set queue attributes at runtime

<snip>

> +/**
> + * Set an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> +					 uint8_t queue_id, uint32_t attr_id,
> +					 uint32_t attr_value);

Is using a uint64_t a better type for attr_value? Given there might be more attributes in future,
limiting to 32 bits now may cause headaches later, and uint64_t doesn't cost extra?

I think 32-bits of attr_id is enough :)

Same comment on the _get() API in patch 2/6, a uint64_t * would be a better fit there in my opinion.

<snip>


* RE: [PATCH 5/6] event/cnxk: support to set runtime queue attributes
  2022-03-29 13:11 ` [PATCH 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
@ 2022-03-30 11:05   ` Van Haaren, Harry
  2022-04-04  7:59     ` Shijith Thotton
  0 siblings, 1 reply; 58+ messages in thread
From: Van Haaren, Harry @ 2022-03-30 11:05 UTC (permalink / raw)
  To: Shijith Thotton, dev, jerinj; +Cc: pbhagavatula

> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Tuesday, March 29, 2022 2:11 PM
> To: dev@dpdk.org; jerinj@marvell.com
> Cc: Shijith Thotton <sthotton@marvell.com>; pbhagavatula@marvell.com
> Subject: [PATCH 5/6] event/cnxk: support to set runtime queue attributes

<snip>

> +int
> +cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t
> queue_id,
> +			     uint32_t attr_id, uint32_t *attr_value)
> +{
> +	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> +
> +	*attr_value = attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT ?
> +			      dev->mlt_prio[queue_id].weight :
> +			      dev->mlt_prio[queue_id].affinity;

This is prone to future bugs: adding a new eventdev attr will silently return .affinity
instead of the attr that was requested.

Prefer a switch (attr_id) that explicitly handles each attr_id, with a default case
returning -1, so the PMD signals to the caller that it refuses the requested attr.

On reviewing the code below, the set() does this perfectly... except for the return value?

> +
> +	return 0;
> +}
> +
> +int
> +cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t
> queue_id,
> +			     uint32_t attr_id, uint32_t attr_value)
> +{
> +	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> +	uint8_t priority, weight, affinity;
> +	struct rte_event_queue_conf *conf;
> +
> +	conf = &event_dev->data->queues_cfg[queue_id];
> +
> +	switch (attr_id) {
> +	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
> +		conf->priority = attr_value;
> +		break;
> +	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
> +		dev->mlt_prio[queue_id].weight = attr_value;
> +		break;
> +	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
> +		dev->mlt_prio[queue_id].affinity = attr_value;
> +		break;
> +	default:
> +		plt_sso_dbg("Ignored setting attribute id %u", attr_id);
> +		return 0;
> +	}

Why return 0 here? This is a failure; the PMD did *not* set the requested attribute.
Make the user aware of that fact, return -1; or -EINVAL or something.

Document the explicit return values at Eventdev header level, so all PMDs can
align on the return values, providing consistency to the application.

<snip>


* Re: [PATCH 2/6] eventdev: add weight and affinity to queue attributes
  2022-03-29 13:11 ` [PATCH 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
@ 2022-03-30 12:12   ` Mattias Rönnblom
  2022-04-04  9:33     ` Shijith Thotton
  0 siblings, 1 reply; 58+ messages in thread
From: Mattias Rönnblom @ 2022-03-30 12:12 UTC (permalink / raw)
  To: Shijith Thotton, dev, jerinj; +Cc: pbhagavatula

On 2022-03-29 15:11, Shijith Thotton wrote:
> Extended eventdev queue QoS attributes to support weight and affinity.
> If queues are of same priority, events from the queue with highest
> weight will be scheduled first. Affinity indicates the number of times,
> the subsequent schedule calls from an event port will use the same event
> queue. Schedule call selects another queue if current queue goes empty
> or schedule count reaches affinity count.
>
> To avoid ABI break, weight and affinity attributes are not yet added to
> queue config structure and relies on PMD for managing it. New eventdev
> op queue_attr_get can be used to get it from the PMD.

Have you considered using a PMD-specific command line parameter as a 
stop-gap until you can extend the config struct?

> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
>   lib/eventdev/eventdev_pmd.h | 22 ++++++++++++++++++++
>   lib/eventdev/rte_eventdev.c | 12 +++++++++++
>   lib/eventdev/rte_eventdev.h | 41 +++++++++++++++++++++++++++++++++----
>   3 files changed, 71 insertions(+), 4 deletions(-)
>
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index 6182749503..f19df98a7a 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>   typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>   		uint8_t queue_id);
>   
> +/**
> + * Get an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param[out] attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
> +					 uint8_t queue_id, uint32_t attr_id,
> +					 uint32_t *attr_value);
> +
>   /**
>    * Set an event queue attribute at runtime.
>    *
> @@ -1231,6 +1251,8 @@ struct eventdev_ops {
>   	/**< Set up an event queue. */
>   	eventdev_queue_release_t queue_release;
>   	/**< Release an event queue. */
> +	eventdev_queue_attr_get_t queue_attr_get;
> +	/**< Get an event queue attribute. */
>   	eventdev_queue_attr_set_t queue_attr_set;
>   	/**< Set an event queue attribute. */
>   
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 13c8af877e..37f0e54bf3 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>   
>   		*attr_value = conf->schedule_type;
>   		break;
> +	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
> +		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
> +		if (dev->dev_ops->queue_attr_get)
> +			return (*dev->dev_ops->queue_attr_get)(
> +				dev, queue_id, attr_id, attr_value);
> +		break;
> +	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
> +		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
> +		if (dev->dev_ops->queue_attr_get)
> +			return (*dev->dev_ops->queue_attr_get)(
> +				dev, queue_id, attr_id, attr_value);
> +		break;
>   	default:
>   		return -EINVAL;
>   	};
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 19710cd0c5..fa16fc5dcb 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -222,8 +222,14 @@ struct rte_event;
>   
>   /* Event device capability bitmap flags */
>   #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
> -/**< Event scheduling prioritization is based on the priority associated with
> - *  each event queue.
> +/**< Event scheduling prioritization is based on the priority and weight
> + * associated with each event queue. Events from a queue with highest priority
> + * is scheduled first. If the queues are of same priority, a queue with highest
> + * weight is selected. Subsequent schedules from an event port could see events
> + * from the same event queue if the queue is configured with an affinity count.
> + * Affinity count of a queue indicates the number of times, the subsequent
> + * schedule calls from an event port should use the same queue if the queue is
> + * non-empty.

Is this specifying something else than WRR scheduling for equal-priority 
queues?

What is a schedule call? I must say I don't understand this description. 
Is affinity the per-port batch size from the queue that is "next in 
line" for an opportunity to be scheduled to a port?

>    *
>    *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>    */
> @@ -331,6 +337,26 @@ struct rte_event;
>    * @see rte_event_port_link()
>    */
>   
> +/* Event queue scheduling weights */
> +#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST   255
> +/**< Highest weight of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +#define RTE_EVENT_QUEUE_WEIGHT_LOWEST    0
> +/**< Lowest weight of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +
> +/* Event queue scheduling affinity */
> +#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST   255
> +/**< Highest scheduling affinity of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +#define RTE_EVENT_QUEUE_AFFINITY_LOWEST    0
> +/**< Lowest scheduling affinity of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +
>   /**
>    * Get the total number of event devices that have been successfully
>    * initialised.
> @@ -684,11 +710,18 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>    * The schedule type of the queue.
>    */
>   #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
> -
> +/**
> + * The weight of the queue.
> + */
> +#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
> +/**
> + * Affinity of the queue.
> + */
> +#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
>   /**
>    * Maximum supported attribute ID.
>    */
> -#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
> +#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_AFFINITY
>   

>   /**
>    * Get an attribute from a queue.



* Re: [PATCH 1/6] eventdev: support to set queue attributes at runtime
  2022-03-29 13:11 ` [PATCH 1/6] eventdev: support to set " Shijith Thotton
  2022-03-30 10:58   ` Van Haaren, Harry
@ 2022-03-30 12:14   ` Mattias Rönnblom
  2022-04-04 11:45     ` Shijith Thotton
  1 sibling, 1 reply; 58+ messages in thread
From: Mattias Rönnblom @ 2022-03-30 12:14 UTC (permalink / raw)
  To: Shijith Thotton, dev, jerinj; +Cc: pbhagavatula, Ray Kinsella

On 2022-03-29 15:11, Shijith Thotton wrote:
> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> attributes at runtime from the values set during initialization using
> rte_event_queue_setup(). PMD's supporting this feature should expose the
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
>   doc/guides/eventdevs/features/default.ini |  1 +
>   lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++
>   lib/eventdev/rte_eventdev.c               | 31 ++++++++++++++++++
>   lib/eventdev/rte_eventdev.h               | 38 ++++++++++++++++++++++-
>   lib/eventdev/version.map                  |  3 ++
>   5 files changed, 94 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
> index 2ea233463a..00360f60c6 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -17,6 +17,7 @@ runtime_port_link          =
>   multiple_queue_port        =
>   carry_flow_id              =
>   maintenance_free           =
> +runtime_queue_attr         =
>   
>   ;
>   ; Features of a default Ethernet Rx adapter.
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index ce469d47a6..6182749503 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>   typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>   		uint8_t queue_id);
>   
> +/**
> + * Set an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> +					 uint8_t queue_id, uint32_t attr_id,
> +					 uint32_t attr_value);
> +
>   /**
>    * Retrieve the default event port configuration.
>    *
> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>   	/**< Set up an event queue. */
>   	eventdev_queue_release_t queue_release;
>   	/**< Release an event queue. */
> +	eventdev_queue_attr_set_t queue_attr_set;
> +	/**< Set an event queue attribute. */
>   
>   	eventdev_port_default_conf_get_t port_def_conf;
>   	/**< Get default port configuration. */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 532a253553..13c8af877e 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -844,6 +844,37 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>   	return 0;
>   }
>   
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +			 uint32_t attr_value)
> +{
> +	struct rte_eventdev *dev;
> +
> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +	dev = &rte_eventdevs[dev_id];
> +	if (!is_valid_queue(dev, queue_id)) {
> +		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> +		return -EINVAL;
> +	}
> +
> +	if (attr_id > RTE_EVENT_QUEUE_ATTR_MAX) {
> +		RTE_EDEV_LOG_ERR("Invalid attribute ID %" PRIu8, attr_id);
> +		return -EINVAL;
> +	}
> +
> +	if (!(dev->data->event_dev_cap &
> +	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
> +		RTE_EDEV_LOG_ERR(
> +			"Device %" PRIu8 "does not support changing queue attributes at runtime",
> +			dev_id);
> +		return -ENOTSUP;
> +	}
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
> +	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
> +					       attr_value);
> +}
> +
>   int
>   rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>   		    const uint8_t queues[], const uint8_t priorities[],
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 42a5660169..19710cd0c5 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -225,7 +225,7 @@ struct rte_event;
>   /**< Event scheduling prioritization is based on the priority associated with
>    *  each event queue.
>    *
> - *  @see rte_event_queue_setup()
> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>    */
>   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>   /**< Event scheduling prioritization is based on the priority associated with
> @@ -307,6 +307,13 @@ struct rte_event;
>    * global pool, or process signaling related to load balancing.
>    */
>   
> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> +/**< Event device is capable of changing the queue attributes at runtime i.e after
> + * rte_event_queue_setup() or rte_event_start() call sequence. If this flag is
> + * not set, eventdev queue attributes can only be configured during
> + * rte_event_queue_setup().
> + */
> +
>   /* Event device priority levels */
>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>   /**< Highest priority expressed across eventdev subsystem
> @@ -678,6 +685,11 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>    */
>   #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
>   
> +/**
> + * Maximum supported attribute ID.
> + */
> +#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
> +

This #define will assure that every new attribute breaks the ABI. Is 
that intentional?

>   /**
>    * Get an attribute from a queue.
>    *
> @@ -702,6 +714,30 @@ int
>   rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>   			uint32_t *attr_value);
>   
> +/**
> + * Set an event queue attribute.
> + *
> + * @param dev_id
> + *   Eventdev id
> + * @param queue_id
> + *   Eventdev queue id
> + * @param attr_id
> + *   The attribute ID to set
> + * @param attr_value
> + *   The attribute value to set
> + *
> + * @return
> + *   - 0: Successfully set attribute.
> + *   - -EINVAL: invalid device, queue or attr_id.
> + *   - -ENOTSUP: device does not support setting event attribute.
> + *   - -EBUSY: device is in running state
> + *   - <0: failed to set event queue attribute
> + */
> +__rte_experimental
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +			 uint32_t attr_value);
> +
>   /* Event port specific APIs */
>   
>   /* Event port configuration bitmap flags */
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index cd5dada07f..c581b75c18 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>   
>   	# added in 22.03
>   	rte_event_eth_rx_adapter_event_port_get;
> +
> +	# added in 22.07
> +	rte_event_queue_attr_set;
>   };
>   
>   INTERNAL {



* RE: [PATCH 0/6] Extend and set event queue attributes at runtime
  2022-03-30 10:52   ` Van Haaren, Harry
@ 2022-04-04  7:57     ` Shijith Thotton
  0 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-04  7:57 UTC (permalink / raw)
  To: Van Haaren, Harry, Jerin Jacob, Jayatheerthan, Jay, Carrillo,
	Erik G, Gujjar, Abhinandan S, McDaniel, Timothy, Hemant Agrawal,
	Nipun Gupta, mattias.ronnblom, Ray Kinsella
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Pavan Nikhilesh Bhagavatula,
	Liang Ma

>> >
>> > This series adds support for setting event queue attributes at runtime
>> > and adds two new event queue attributes weight and affinity. Eventdev
>> > capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to expose
>> the
>> > capability to set attributes at runtime and rte_event_queue_attr_set()
>> > API is used to set the attributes.
>> >
>> > Attributes weight and affinity are not yet added to rte_event_queue_conf
>> > structure to avoid ABI break and will be added in 22.11. Till then, PMDs
>> > using the new attributes are expected to manage them.
>
>When the new attributes are added to queue_conf structure in 22.11, will the
>attr_get() function have any real use?
>
>If the attr_get() function is not useful post 22.11 (aka, returns const-integers?), we
>should consider if waiting
>for ABI-break in 22.11 is a better solution as it doesn't add public API/ABI functions
>that only have limited time value..?
>

queue_attr_get is an internal op and is not called if the op is not set by the
PMD, so no changes are needed in other PMDs to incorporate this. It is useful
to PMDs that need the new attributes before they are added to the
rte_event_queue_conf struct in 22.11.
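That optional-op dispatch can be sketched as follows, as a standalone model with simplified stand-ins for the eventdev types (the names and the default value here are illustrative, not the library's):

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for RTE_EVENT_QUEUE_WEIGHT_LOWEST, the default the library
 * reports when the PMD does not implement the op. */
#define QUEUE_WEIGHT_LOWEST 0

typedef int (*queue_attr_get_op_t)(uint32_t attr_id, uint32_t *attr_value);

/* Mirror of the pattern in rte_event_queue_attr_get(): write a default
 * first, then call into the PMD only if it provides the optional op. */
static int
lib_queue_attr_get(queue_attr_get_op_t op, uint32_t attr_id,
		   uint32_t *attr_value)
{
	*attr_value = QUEUE_WEIGHT_LOWEST; /* default when PMD has no op */
	if (op != NULL)
		return op(attr_id, attr_value);
	return 0;
}

/* Example PMD op used below to exercise the dispatch. */
static int
mock_pmd_attr_get(uint32_t attr_id, uint32_t *attr_value)
{
	(void)attr_id;
	*attr_value = 7; /* pretend the PMD stores weight 7 */
	return 0;
}
```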

><snip>
>
>> + @Van Haaren, Harry  @Jayatheerthan, Jay  @Erik Gabriel Carrillo
>> @Gujjar, Abhinandan S  @McDaniel, Timothy  @Hemant Agrawal  @Nipun
>> Gupta  @Mattias Rönnblom  @lingma @Ray Kinsella
>
>Thanks for flagging Jerin, indeed I hadn't looked at this patchset yet.
>
>From event/sw point of view, the new runtime queue attribute capability is not
>available, so the feature flag will not be set.
>
><snip>
>
>Some code comments inline on the impl patches comping up. Regards, -Harry


* RE: [PATCH 5/6] event/cnxk: support to set runtime queue attributes
  2022-03-30 11:05   ` Van Haaren, Harry
@ 2022-04-04  7:59     ` Shijith Thotton
  0 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-04  7:59 UTC (permalink / raw)
  To: Van Haaren, Harry, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula

><snip>
>
>> +int
>> +cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t
>> queue_id,
>> +			     uint32_t attr_id, uint32_t *attr_value)
>> +{
>> +	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
>> +
>> +	*attr_value = attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT ?
>> +			      dev->mlt_prio[queue_id].weight :
>> +			      dev->mlt_prio[queue_id].affinity;
>
>This is future-bug prone, as adding a new Eventdev attr will return .affinity silently,
>instead of the attr that is being requested.
>
>Prefer a switch(attr_id), and explicitly handle each attr_id, with a default case
>to return -1, showing the PMD refusing to handle the attr requested to the caller.
>
 
Will change it to be similar to set().

>On reviewing the below, the set() below does this perfectly... except the return?
>
>> +
>> +	return 0;
>> +}
>> +
>> +int
>> +cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t
>> queue_id,
>> +			     uint32_t attr_id, uint32_t attr_value)
>> +{
>> +	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
>> +	uint8_t priority, weight, affinity;
>> +	struct rte_event_queue_conf *conf;
>> +
>> +	conf = &event_dev->data->queues_cfg[queue_id];
>> +
>> +	switch (attr_id) {
>> +	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
>> +		conf->priority = attr_value;
>> +		break;
>> +	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
>> +		dev->mlt_prio[queue_id].weight = attr_value;
>> +		break;
>> +	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
>> +		dev->mlt_prio[queue_id].affinity = attr_value;
>> +		break;
>> +	default:
>> +		plt_sso_dbg("Ignored setting attribute id %u", attr_id);
>> +		return 0;
>> +	}
>
>Why return 0 here? This is a failure, the PMD did *not* set the attribute ID.
>Make the user aware of that fact, return -1; or -EINVAL or something.
>
>Document the explicit return values at Eventdev header level, so all PMDs can
>align on the return values, providing consistency to the application.
>

Will update PMD and library with error number.

><snip>


* RE: [PATCH 2/6] eventdev: add weight and affinity to queue attributes
  2022-03-30 12:12   ` Mattias Rönnblom
@ 2022-04-04  9:33     ` Shijith Thotton
  0 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-04  9:33 UTC (permalink / raw)
  To: Mattias Rönnblom, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula

>> Extended eventdev queue QoS attributes to support weight and affinity.
>> If queues are of same priority, events from the queue with highest
>> weight will be scheduled first. Affinity indicates the number of times,
>> the subsequent schedule calls from an event port will use the same event
>> queue. Schedule call selects another queue if current queue goes empty
>> or schedule count reaches affinity count.
>>
>> To avoid ABI break, weight and affinity attributes are not yet added to
>> queue config structure and relies on PMD for managing it. New eventdev
>> op queue_attr_get can be used to get it from the PMD.
>
>Have you considered using a PMD-specific command line parameter as a
>stop-gap until you can extend the config struct?
>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> ---
>>   lib/eventdev/eventdev_pmd.h | 22 ++++++++++++++++++++
>>   lib/eventdev/rte_eventdev.c | 12 +++++++++++
>>   lib/eventdev/rte_eventdev.h | 41 +++++++++++++++++++++++++++++++++-
>---
>>   3 files changed, 71 insertions(+), 4 deletions(-)
>>
>> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
>> index 6182749503..f19df98a7a 100644
>> --- a/lib/eventdev/eventdev_pmd.h
>> +++ b/lib/eventdev/eventdev_pmd.h
>> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct
>rte_eventdev *dev,
>>   typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>>   		uint8_t queue_id);
>>
>> +/**
>> + * Get an event queue attribute at runtime.
>> + *
>> + * @param dev
>> + *   Event device pointer
>> + * @param queue_id
>> + *   Event queue index
>> + * @param attr_id
>> + *   Event queue attribute id
>> + * @param[out] attr_value
>> + *   Event queue attribute value
>> + *
>> + * @return
>> + *  - 0: Success.
>> + *  - <0: Error code on failure.
>> + */
>> +typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
>> +					 uint8_t queue_id, uint32_t attr_id,
>> +					 uint32_t *attr_value);
>> +
>>   /**
>>    * Set an event queue attribute at runtime.
>>    *
>> @@ -1231,6 +1251,8 @@ struct eventdev_ops {
>>   	/**< Set up an event queue. */
>>   	eventdev_queue_release_t queue_release;
>>   	/**< Release an event queue. */
>> +	eventdev_queue_attr_get_t queue_attr_get;
>> +	/**< Get an event queue attribute. */
>>   	eventdev_queue_attr_set_t queue_attr_set;
>>   	/**< Set an event queue attribute. */
>>
>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>> index 13c8af877e..37f0e54bf3 100644
>> --- a/lib/eventdev/rte_eventdev.c
>> +++ b/lib/eventdev/rte_eventdev.c
>> @@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t
>queue_id, uint32_t attr_id,
>>
>>   		*attr_value = conf->schedule_type;
>>   		break;
>> +	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
>> +		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
>> +		if (dev->dev_ops->queue_attr_get)
>> +			return (*dev->dev_ops->queue_attr_get)(
>> +				dev, queue_id, attr_id, attr_value);
>> +		break;
>> +	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
>> +		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
>> +		if (dev->dev_ops->queue_attr_get)
>> +			return (*dev->dev_ops->queue_attr_get)(
>> +				dev, queue_id, attr_id, attr_value);
>> +		break;
>>   	default:
>>   		return -EINVAL;
>>   	};
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index 19710cd0c5..fa16fc5dcb 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -222,8 +222,14 @@ struct rte_event;
>>
>>   /* Event device capability bitmap flags */
>>   #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
>> -/**< Event scheduling prioritization is based on the priority associated with
>> - *  each event queue.
>> +/**< Event scheduling prioritization is based on the priority and weight
>> + * associated with each event queue. Events from a queue with highest priority
>> + * is scheduled first. If the queues are of same priority, a queue with highest
>> + * weight is selected. Subsequent schedules from an event port could see
>events
>> + * from the same event queue if the queue is configured with an affinity count.
>> + * Affinity count of a queue indicates the number of times, the subsequent
>> + * schedule calls from an event port should use the same queue if the queue is
>> + * non-empty.
>
>Is this specifying something else than WRR scheduling for equal-priority
>queues?
>

It is WRR for equal-priority queues. I will update the text as follows. Please check.

/**< Event scheduling prioritization is based on the priority and weight
 * associated with each event queue. Events from a queue with highest priority
 * is scheduled first. If the queues are of same priority, weight of the queues
 * are used to select a queue in a weighted round robin fashion. Subsequent
 * dequeue calls from an event port could see events from the same event queue
 * if the queue is configured with an affinity count. Affinity count of a queue
 * indicates the number of subsequent dequeue calls from an event port which
 * should use the same queue if the queue is non-empty.

>What is a schedule call? I must say I don't understand this description.
 
A schedule call refers to a dequeue call. I have updated the text to avoid confusion.

>Is affinity the per-port batch size from the queue that is "next in
>line" for an opportunity to be scheduled to a port?
>

Not exactly batch size. It is the number of subsequent dequeue calls which
should use the same queue. So the subsequent dequeue calls could return a
maximum of affinity * batch_size events from the same queue.
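The semantics described above can be modelled with a small software-only sketch. This is purely illustrative: the real scheduling happens inside the event device (e.g. the cnxk SSO hardware), it assumes all queues share the same priority, and queue_model, port_model and select_queue are invented names:

```c
#include <stdint.h>

#define NUM_QUEUES 2

struct queue_model {
	uint8_t weight;    /* WRR weight among equal-priority queues */
	uint8_t affinity;  /* subsequent dequeue calls to stay on queue */
	int nonempty;
};

struct port_model {
	int cur_queue;                   /* queue the port last used */
	uint8_t stick_left;              /* affinity budget remaining */
	uint16_t wrr_credit[NUM_QUEUES]; /* WRR credits from weights */
};

/* Pick the queue serviced by one dequeue call; -1 if all queues empty. */
static int
select_queue(struct port_model *p, const struct queue_model q[NUM_QUEUES])
{
	int i, best = -1;

	/* Affinity: keep the current queue while the budget lasts and
	 * the queue stays non-empty. */
	if (p->cur_queue >= 0 && p->stick_left > 0 &&
	    q[p->cur_queue].nonempty) {
		p->stick_left--;
		return p->cur_queue;
	}

	/* WRR: pick the non-empty queue with the most credit left. */
	for (i = 0; i < NUM_QUEUES; i++)
		if (q[i].nonempty && p->wrr_credit[i] > 0 &&
		    (best < 0 || p->wrr_credit[i] > p->wrr_credit[best]))
			best = i;

	if (best < 0) {
		/* Credits exhausted: refill from the queue weights. */
		for (i = 0; i < NUM_QUEUES; i++)
			p->wrr_credit[i] = q[i].weight;
		for (i = 0; i < NUM_QUEUES; i++)
			if (q[i].nonempty && p->wrr_credit[i] > 0 &&
			    (best < 0 ||
			     p->wrr_credit[i] > p->wrr_credit[best]))
				best = i;
	}
	if (best < 0)
		return -1;

	p->wrr_credit[best]--;
	p->cur_queue = best;
	p->stick_left = q[best].affinity;
	return best;
}
```

With weights 2:1 and zero affinity, the port services queue 0 twice for every visit to queue 1; with affinity 2 on queue 0, each selection of queue 0 is followed by two more dequeue calls on it before WRR resumes.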

>>    *
>>    *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>>    */
>> @@ -331,6 +337,26 @@ struct rte_event;
>>    * @see rte_event_port_link()
>>    */
>>
>> +/* Event queue scheduling weights */
>> +#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST   255
>> +/**< Highest weight of an event queue
>> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>> + */
>> +#define RTE_EVENT_QUEUE_WEIGHT_LOWEST    0
>> +/**< Lowest weight of an event queue
>> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>> + */
>> +
>> +/* Event queue scheduling affinity */
>> +#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST   255
>> +/**< Highest scheduling affinity of an event queue
>> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>> + */
>> +#define RTE_EVENT_QUEUE_AFFINITY_LOWEST    0
>> +/**< Lowest scheduling affinity of an event queue
>> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
>> + */
>> +
>>   /**
>>    * Get the total number of event devices that have been successfully
>>    * initialised.
>> @@ -684,11 +710,18 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t
>queue_id,
>>    * The schedule type of the queue.
>>    */
>>   #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
>> -
>> +/**
>> + * The weight of the queue.
>> + */
>> +#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
>> +/**
>> + * Affinity of the queue.
>> + */
>> +#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
>>   /**
>>    * Maximum supported attribute ID.
>>    */
>> -#define RTE_EVENT_QUEUE_ATTR_MAX
>RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
>> +#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_AFFINITY
>>
>
>>   /**
>>    * Get an attribute from a queue.



* RE: [PATCH 1/6] eventdev: support to set queue attributes at runtime
  2022-03-30 10:58   ` Van Haaren, Harry
@ 2022-04-04  9:35     ` Shijith Thotton
  2022-04-04  9:45       ` Van Haaren, Harry
  0 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-04-04  9:35 UTC (permalink / raw)
  To: Van Haaren, Harry, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, Ray Kinsella

><snip>
>
>> +/**
>> + * Set an event queue attribute at runtime.
>> + *
>> + * @param dev
>> + *   Event device pointer
>> + * @param queue_id
>> + *   Event queue index
>> + * @param attr_id
>> + *   Event queue attribute id
>> + * @param attr_value
>> + *   Event queue attribute value
>> + *
>> + * @return
>> + *  - 0: Success.
>> + *  - <0: Error code on failure.
>> + */
>> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
>> +					 uint8_t queue_id, uint32_t attr_id,
>> +					 uint32_t attr_value);
>
>Is using a uint64_t a better type for attr_value? Given there might be more in
>future,
>limiting to 32-bits now may cause headaches later, and uint64_t doesn't cost
>extra?
>
>I think 32-bits of attr_id is enough :)
>
>Same comment on the _get() API in patch 2/6, a uint64_t * would be a better fit
>there in my opinion.
>
><snip>
 
Changing the size of attr_value would be an ABI break. Can we wait till a need arises?


* RE: [PATCH 1/6] eventdev: support to set queue attributes at runtime
  2022-04-04  9:35     ` Shijith Thotton
@ 2022-04-04  9:45       ` Van Haaren, Harry
  0 siblings, 0 replies; 58+ messages in thread
From: Van Haaren, Harry @ 2022-04-04  9:45 UTC (permalink / raw)
  To: Shijith Thotton, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, Ray Kinsella

> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Monday, April 4, 2022 10:36 AM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>; dev@dpdk.org; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Ray Kinsella
> <mdr@ashroe.eu>
> Subject: RE: [PATCH 1/6] eventdev: support to set queue attributes at runtime
> 
> ><snip>
> >
> >> +/**
> >> + * Set an event queue attribute at runtime.
> >> + *
> >> + * @param dev
> >> + *   Event device pointer
> >> + * @param queue_id
> >> + *   Event queue index
> >> + * @param attr_id
> >> + *   Event queue attribute id
> >> + * @param attr_value
> >> + *   Event queue attribute value
> >> + *
> >> + * @return
> >> + *  - 0: Success.
> >> + *  - <0: Error code on failure.
> >> + */
> >> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> >> +					 uint8_t queue_id, uint32_t attr_id,
> >> +					 uint32_t attr_value);
> >
> >Is using a uint64_t a better type for attr_value? Given there might be more in
> >future,
> >limiting to 32-bits now may cause headaches later, and uint64_t doesn't cost
> >extra?
> >
> >I think 32-bits of attr_id is enough :)
> >
> >Same comment on the _get() API in patch 2/6, a uint64_t * would be a better fit
> >there in my opinion.
> >
> ><snip>
> 
> Changing the size of attr_value would be an ABI break. Can we wait till a need arises?

Ah, I forgot that the _get() function is already upstream in DPDK today.

It's actually an API *and* ABI break, which is worse, as user code would have to
change (not just a re-compile against the newer DPDK version...). Any application
attempting source-compatibility with 21.11 and 22.11 would have to #ifdef the
parameter, switching uint32_t* and uint64_t*... or use some magic void* hacks.
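That source-compatibility problem can be illustrated with a sketch of such a shim. The RTE_VERSION macros are mocked here only to keep the sketch self-contained (a real application would include <rte_version.h>), and the uint64_t type change in 22.11 is hypothetical:

```c
#include <stdint.h>

/* Mock of the rte_version.h version macros, for illustration only. */
#define RTE_VERSION_NUM(a, b, c, d) \
	(((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
#define RTE_VERSION RTE_VERSION_NUM(22, 11, 0, 0) /* pretend 22.11 build */

/* The kind of conditional an application would need if the attr_value
 * type of rte_event_queue_attr_get() changed between releases. */
#if RTE_VERSION >= RTE_VERSION_NUM(22, 11, 0, 0)
typedef uint64_t queue_attr_value_t; /* hypothetical wider type */
#else
typedef uint32_t queue_attr_value_t;
#endif
```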

Yes I suppose that waiting until a u64 is required for a real-world use-case is probably
better than breaking existing users code today (or in next ABI breaking release) with the
intent of getting to "perfect" API/ABIs...

Suggest using a u64 for _set() to avoid getting into this same situation again,
but leaving _get() as is, until a real use-case requires it to change?



* RE: [PATCH 1/6] eventdev: support to set queue attributes at runtime
  2022-03-30 12:14   ` Mattias Rönnblom
@ 2022-04-04 11:45     ` Shijith Thotton
  0 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-04 11:45 UTC (permalink / raw)
  To: Mattias Rönnblom, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, Ray Kinsella

>> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
>> attributes at runtime from the values set during initialization using
>> rte_event_queue_setup(). PMD's supporting this feature should expose the
>> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> ---
>>   doc/guides/eventdevs/features/default.ini |  1 +
>>   lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++
>>   lib/eventdev/rte_eventdev.c               | 31 ++++++++++++++++++
>>   lib/eventdev/rte_eventdev.h               | 38 ++++++++++++++++++++++-
>>   lib/eventdev/version.map                  |  3 ++
>>   5 files changed, 94 insertions(+), 1 deletion(-)
>>
>> diff --git a/doc/guides/eventdevs/features/default.ini
>b/doc/guides/eventdevs/features/default.ini
>> index 2ea233463a..00360f60c6 100644
>> --- a/doc/guides/eventdevs/features/default.ini
>> +++ b/doc/guides/eventdevs/features/default.ini
>> @@ -17,6 +17,7 @@ runtime_port_link          =
>>   multiple_queue_port        =
>>   carry_flow_id              =
>>   maintenance_free           =
>> +runtime_queue_attr         =
>>
>>   ;
>>   ; Features of a default Ethernet Rx adapter.
>> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
>> index ce469d47a6..6182749503 100644
>> --- a/lib/eventdev/eventdev_pmd.h
>> +++ b/lib/eventdev/eventdev_pmd.h
>> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct
>rte_eventdev *dev,
>>   typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>>   		uint8_t queue_id);
>>
>> +/**
>> + * Set an event queue attribute at runtime.
>> + *
>> + * @param dev
>> + *   Event device pointer
>> + * @param queue_id
>> + *   Event queue index
>> + * @param attr_id
>> + *   Event queue attribute id
>> + * @param attr_value
>> + *   Event queue attribute value
>> + *
>> + * @return
>> + *  - 0: Success.
>> + *  - <0: Error code on failure.
>> + */
>> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
>> +					 uint8_t queue_id, uint32_t attr_id,
>> +					 uint32_t attr_value);
>> +
>>   /**
>>    * Retrieve the default event port configuration.
>>    *
>> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>>   	/**< Set up an event queue. */
>>   	eventdev_queue_release_t queue_release;
>>   	/**< Release an event queue. */
>> +	eventdev_queue_attr_set_t queue_attr_set;
>> +	/**< Set an event queue attribute. */
>>
>>   	eventdev_port_default_conf_get_t port_def_conf;
>>   	/**< Get default port configuration. */
>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>> index 532a253553..13c8af877e 100644
>> --- a/lib/eventdev/rte_eventdev.c
>> +++ b/lib/eventdev/rte_eventdev.c
>> @@ -844,6 +844,37 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>   	return 0;
>>   }
>>
>> +int
>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>> +			 uint32_t attr_value)
>> +{
>> +	struct rte_eventdev *dev;
>> +
>> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>> +	dev = &rte_eventdevs[dev_id];
>> +	if (!is_valid_queue(dev, queue_id)) {
>> +		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (attr_id > RTE_EVENT_QUEUE_ATTR_MAX) {
>> +		RTE_EDEV_LOG_ERR("Invalid attribute ID %" PRIu8, attr_id);
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (!(dev->data->event_dev_cap &
>> +	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
>> +		RTE_EDEV_LOG_ERR(
>> +			"Device %" PRIu8 " does not support changing queue attributes at runtime",
>> +			dev_id);
>> +		return -ENOTSUP;
>> +	}
>> +
>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
>> +	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
>> +					       attr_value);
>> +}
>> +
>>   int
>>   rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>>   		    const uint8_t queues[], const uint8_t priorities[],
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index 42a5660169..19710cd0c5 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -225,7 +225,7 @@ struct rte_event;
>>   /**< Event scheduling prioritization is based on the priority associated with
>>    *  each event queue.
>>    *
>> - *  @see rte_event_queue_setup()
>> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>>    */
>>   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>>   /**< Event scheduling prioritization is based on the priority associated with
>> @@ -307,6 +307,13 @@ struct rte_event;
>>    * global pool, or process signaling related to load balancing.
>>    */
>>
>> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
>> +/**< Event device is capable of changing the queue attributes at runtime, i.e.
>> + * after the rte_event_queue_setup() or rte_event_dev_start() call sequence. If
>> + * this flag is not set, eventdev queue attributes can only be configured
>> + * during rte_event_queue_setup().
>> + */
>> +
>>   /* Event device priority levels */
>>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>   /**< Highest priority expressed across eventdev subsystem
>> @@ -678,6 +685,11 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>>    */
>>   #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
>>
>> +/**
>> + * Maximum supported attribute ID.
>> + */
>> +#define RTE_EVENT_QUEUE_ATTR_MAX RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE
>> +
>
>This #define will ensure that every new attribute breaks the ABI. Is
>that intentional?
>
 
Will remove.

>>   /**
>>    * Get an attribute from a queue.
>>    *
>> @@ -702,6 +714,30 @@ int
>>   rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>   			uint32_t *attr_value);
>>
>> +/**
>> + * Set an event queue attribute.
>> + *
>> + * @param dev_id
>> + *   Eventdev id
>> + * @param queue_id
>> + *   Eventdev queue id
>> + * @param attr_id
>> + *   The attribute ID to set
>> + * @param attr_value
>> + *   The attribute value to set
>> + *
>> + * @return
>> + *   - 0: Successfully set attribute.
>> + *   - -EINVAL: invalid device, queue or attr_id.
>> + *   - -ENOTSUP: device does not support setting event attribute.
>> + *   - -EBUSY: device is in running state
>> + *   - <0: failed to set event queue attribute
>> + */
>> +__rte_experimental
>> +int
>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>> +			 uint32_t attr_value);
>> +
>>   /* Event port specific APIs */
>>
>>   /* Event port configuration bitmap flags */
>> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
>> index cd5dada07f..c581b75c18 100644
>> --- a/lib/eventdev/version.map
>> +++ b/lib/eventdev/version.map
>> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>>
>>   	# added in 22.03
>>   	rte_event_eth_rx_adapter_event_port_get;
>> +
>> +	# added in 22.07
>> +	rte_event_queue_attr_set;
>>   };
>>
>>   INTERNAL {


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 0/6] Extend and set event queue attributes at runtime
  2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
                   ` (6 preceding siblings ...)
  2022-03-29 18:49 ` [PATCH 0/6] Extend and set event queue attributes at runtime Jerin Jacob
@ 2022-04-05  5:40 ` Shijith Thotton
  2022-04-05  5:40   ` [PATCH v2 1/6] eventdev: support to set " Shijith Thotton
                     ` (7 more replies)
  7 siblings, 8 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:40 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom

This series adds support for setting event queue attributes at runtime
and adds two new event queue attributes weight and affinity. Eventdev
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to expose the
capability to set attributes at runtime and rte_event_queue_attr_set()
API is used to set the attributes.

Attributes weight and affinity are not yet added to the rte_event_queue_conf
structure to avoid an ABI break and will be added in 22.11. Until then, PMDs
using the new attributes are expected to manage them.

Test application changes and an example implementation are added as the
last three patches.

v2:
* Modified attr_value type from u32 to u64 for set().
* Removed RTE_EVENT_QUEUE_ATTR_MAX macro.
* Fixed return value in implementation.

Pavan Nikhilesh (1):
  common/cnxk: use lock when accessing mbox of SSO

Shijith Thotton (5):
  eventdev: support to set queue attributes at runtime
  eventdev: add weight and affinity to queue attributes
  doc: announce change in event queue conf structure
  test/event: test cases to test runtime queue attribute
  event/cnxk: support to set runtime queue attributes

 app/test/test_eventdev.c                  | 149 ++++++++++++++++++
 doc/guides/eventdevs/features/cnxk.ini    |   1 +
 doc/guides/eventdevs/features/default.ini |   1 +
 doc/guides/rel_notes/deprecation.rst      |   3 +
 drivers/common/cnxk/roc_sso.c             | 174 ++++++++++++++++------
 drivers/common/cnxk/roc_sso_priv.h        |   1 +
 drivers/common/cnxk/roc_tim.c             | 134 +++++++++++------
 drivers/event/cnxk/cn10k_eventdev.c       |   4 +
 drivers/event/cnxk/cn9k_eventdev.c        |   4 +
 drivers/event/cnxk/cnxk_eventdev.c        |  91 ++++++++++-
 drivers/event/cnxk/cnxk_eventdev.h        |  16 ++
 lib/eventdev/eventdev_pmd.h               |  44 ++++++
 lib/eventdev/rte_eventdev.c               |  38 +++++
 lib/eventdev/rte_eventdev.h               |  71 ++++++++-
 lib/eventdev/version.map                  |   3 +
 15 files changed, 631 insertions(+), 103 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 1/6] eventdev: support to set queue attributes at runtime
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
@ 2022-04-05  5:40   ` Shijith Thotton
  2022-05-09 12:43     ` Jerin Jacob
  2022-04-05  5:40   ` [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:40 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren,
	mattias.ronnblom, Ray Kinsella

Added a new eventdev API rte_event_queue_attr_set() to change event queue
attributes at runtime from the values set during initialization using
rte_event_queue_setup(). PMDs supporting this feature should expose the
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/eventdevs/features/default.ini |  1 +
 lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
 lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
 lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
 lib/eventdev/version.map                  |  3 +++
 5 files changed, 84 insertions(+), 1 deletion(-)

diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 2ea233463a..00360f60c6 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -17,6 +17,7 @@ runtime_port_link          =
 multiple_queue_port        =
 carry_flow_id              =
 maintenance_free           =
+runtime_queue_attr         =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..3b85d9f7a5 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Set an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint64_t attr_value);
+
 /**
  * Retrieve the default event port configuration.
  *
@@ -1211,6 +1231,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_set_t queue_attr_set;
+	/**< Set an event queue attribute. */
 
 	eventdev_port_default_conf_get_t port_def_conf;
 	/**< Get default port configuration. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..a31e99be02 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	if (!(dev->data->event_dev_cap &
+	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
+		RTE_EDEV_LOG_ERR(
+			"Device %" PRIu8 " does not support changing queue attributes at runtime",
+			dev_id);
+		return -ENOTSUP;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
+	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
+					       attr_value);
+}
+
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..16e9d5fb5b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -225,7 +225,7 @@ struct rte_event;
 /**< Event scheduling prioritization is based on the priority associated with
  *  each event queue.
  *
- *  @see rte_event_queue_setup()
+ *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
@@ -307,6 +307,13 @@ struct rte_event;
  * global pool, or process signaling related to load balancing.
  */
 
+#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
+/**< Event device is capable of changing the queue attributes at runtime, i.e.
+ * after the rte_event_queue_setup() or rte_event_dev_start() call sequence. If
+ * this flag is not set, eventdev queue attributes can only be configured during
+ * rte_event_queue_setup().
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -702,6 +709,30 @@ int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value);
 
+/**
+ * Set an event queue attribute.
+ *
+ * @param dev_id
+ *   Eventdev id
+ * @param queue_id
+ *   Eventdev queue id
+ * @param attr_id
+ *   The attribute ID to set
+ * @param attr_value
+ *   The attribute value to set
+ *
+ * @return
+ *   - 0: Successfully set attribute.
+ *   - -EINVAL: invalid device, queue or attr_id.
+ *   - -ENOTSUP: device does not support setting event attribute.
+ *   - -EBUSY: device is in running state
+ *   - <0: failed to set event queue attribute
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
  2022-04-05  5:40   ` [PATCH v2 1/6] eventdev: support to set " Shijith Thotton
@ 2022-04-05  5:40   ` Shijith Thotton
  2022-05-09 12:46     ` Jerin Jacob
  2022-04-05  5:41   ` [PATCH v2 3/6] doc: announce change in event queue conf structure Shijith Thotton
                     ` (5 subsequent siblings)
  7 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:40 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom

Extended eventdev queue QoS attributes to support weight and affinity.
If queues are of the same priority, events from the queue with the
highest weight will be scheduled first. Affinity indicates the number of
subsequent schedule calls from an event port that will use the same
event queue. The schedule call selects another queue if the current
queue goes empty or the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure; PMDs are expected to manage them.
The new eventdev op queue_attr_get can be used to get them from the PMD.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 lib/eventdev/eventdev_pmd.h | 22 +++++++++++++++++++++
 lib/eventdev/rte_eventdev.c | 12 ++++++++++++
 lib/eventdev/rte_eventdev.h | 38 +++++++++++++++++++++++++++++++++++--
 3 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 3b85d9f7a5..5495aee4f6 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute. */
 
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index a31e99be02..12b261f923 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 		*attr_value = conf->schedule_type;
 		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 16e9d5fb5b..a6fbaf1c11 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -222,8 +222,14 @@ struct rte_event;
 
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
-/**< Event scheduling prioritization is based on the priority associated with
- *  each event queue.
+/**< Event scheduling prioritization is based on the priority and weight
+ * associated with each event queue. Events from the queue with the highest
+ * priority are scheduled first. If queues have the same priority, their
+ * weights are considered to select a queue in a weighted round-robin fashion.
+ * Subsequent dequeue calls from an event port could see events from the same
+ * event queue, if the queue is configured with an affinity count. Affinity
+ * count is the number of subsequent dequeue calls in which an event port
+ * should use the same event queue if the queue is non-empty.
  *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
@@ -331,6 +337,26 @@ struct rte_event;
  * @see rte_event_port_link()
  */
 
+/* Event queue scheduling weights */
+#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST   255
+/**< Highest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_WEIGHT_LOWEST    0
+/**< Lowest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
+/* Event queue scheduling affinity */
+#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST   255
+/**< Highest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_AFFINITY_LOWEST    0
+/**< Lowest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
 /**
  * Get the total number of event devices that have been successfully
  * initialised.
@@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  * The schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
+/**
+ * The weight of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
+/**
+ * Affinity of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
  * Get an attribute from a queue.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 3/6] doc: announce change in event queue conf structure
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
  2022-04-05  5:40   ` [PATCH v2 1/6] eventdev: support to set " Shijith Thotton
  2022-04-05  5:40   ` [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
@ 2022-04-05  5:41   ` Shijith Thotton
  2022-05-09 12:47     ` Jerin Jacob
  2022-05-15 10:24     ` [PATCH v3] " Shijith Thotton
  2022-04-05  5:41   ` [PATCH v2 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
                     ` (4 subsequent siblings)
  7 siblings, 2 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:41 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren,
	mattias.ronnblom, Ray Kinsella

Structure rte_event_queue_conf will be extended to include fields to
support the weight and affinity attributes. Once they are added in DPDK
22.11, the eventdev internal op queue_attr_get can be removed.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c53d..04125db681 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,3 +125,6 @@ Deprecation Notices
   applications should be updated to use the ``dmadev`` library instead,
   with the underlying HW-functionality being provided by the ``ioat`` or
   ``idxd`` dma drivers
+
+* eventdev: New fields to represent event queue weight and affinity will be
+  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 4/6] test/event: test cases to test runtime queue attribute
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
                     ` (2 preceding siblings ...)
  2022-04-05  5:41   ` [PATCH v2 3/6] doc: announce change in event queue conf structure Shijith Thotton
@ 2022-04-05  5:41   ` Shijith Thotton
  2022-05-09 12:55     ` Jerin Jacob
  2022-04-05  5:41   ` [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
                     ` (3 subsequent siblings)
  7 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:41 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom

Added test cases to verify changing the queue QoS attributes priority,
weight and affinity at runtime.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 app/test/test_eventdev.c | 149 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 149 insertions(+)

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 4f51042bda..1af93d3b77 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -385,6 +385,149 @@ test_eventdev_queue_attr_priority(void)
 	return TEST_SUCCESS;
 }
 
+static int
+test_eventdev_queue_attr_priority_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_DEV_PRIORITY_LOWEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_PRIORITY,
+						 set_val),
+			"Queue priority set failed");
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_get(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_PRIORITY,
+						 &get_val),
+			"Queue priority get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong priority value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_weight_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_WEIGHT,
+						 set_val),
+			"Queue weight set failed");
+		TEST_ASSERT_SUCCESS(rte_event_queue_attr_get(
+					    TEST_DEV_ID, i,
+					    RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val),
+				    "Queue weight get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong weight value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_affinity_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_set(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_AFFINITY,
+						 set_val),
+			"Queue affinity set failed");
+		TEST_ASSERT_SUCCESS(
+			rte_event_queue_attr_get(TEST_DEV_ID, i,
+						 RTE_EVENT_QUEUE_ATTR_AFFINITY,
+						 &get_val),
+			"Queue affinity get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong affinity value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_queue_attr_nb_atomic_flows(void)
 {
@@ -964,6 +1107,12 @@ static struct unit_test_suite eventdev_common_testsuite  = {
 			test_eventdev_queue_count),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_priority),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_priority_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_weight_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_affinity_runtime),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_nb_atomic_flows),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
                     ` (3 preceding siblings ...)
  2022-04-05  5:41   ` [PATCH v2 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
@ 2022-04-05  5:41   ` Shijith Thotton
  2022-05-09 12:57     ` Jerin Jacob
  2022-04-05  5:41   ` [PATCH v2 6/6] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
                     ` (2 subsequent siblings)
  7 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:41 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom

Added an API to set queue attributes at runtime and an API to get the
weight and affinity attributes.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 16 +++++
 5 files changed, 110 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = Y
 
 [Eth Rx adapter Features]
 internal_port              = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..f6973bb691 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -845,9 +845,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..7cb59bbbfa 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1079,9 +1079,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..e07cb589f2 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 				  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 				  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+				  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }
 
 int
@@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 		     const struct rte_event_queue_conf *queue_conf)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
-	/* Normalize <0-255> to <0-7> */
-	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
-					  queue_conf->priority / 32);
+	uint8_t priority, weight, affinity;
+
+	/* Default weight and affinity */
+	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+
+	priority = CNXK_QOS_NORMALIZE(queue_conf->priority,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST,
+				    CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
+		    priority, weight, affinity);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
 }
 
 void
@@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+int
+cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t *attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+	if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT)
+		*attr_value = dev->mlt_prio[queue_id].weight;
+	else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY)
+		*attr_value = dev->mlt_prio[queue_id].affinity;
+	else
+		return -EINVAL;
+
+	return 0;
+}
+
+int
+cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint64_t attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t priority, weight, affinity;
+	struct rte_event_queue_conf *conf;
+
+	conf = &event_dev->data->queues_cfg[queue_id];
+
+	switch (attr_id) {
+	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
+		conf->priority = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		dev->mlt_prio[queue_id].weight = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		dev->mlt_prio[queue_id].affinity = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS:
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES:
+	case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG:
+	case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE:
+		/* FALLTHROUGH */
+		plt_sso_dbg("Unsupported attribute id %u", attr_id);
+		return -ENOTSUP;
+	default:
+		plt_err("Invalid attribute id %u", attr_id);
+		return -EINVAL;
+	}
+
+	priority = CNXK_QOS_NORMALIZE(conf->priority,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].weight,
+				    RTE_EVENT_QUEUE_WEIGHT_HIGHEST,
+				    CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 		       struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 5564746e6d..cde8fc0c67 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,6 +38,9 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
+#define CNXK_SSO_PRIORITY_CNT  (8)
+#define CNXK_SSO_WEIGHT_CNT    (64)
+#define CNXK_SSO_AFFINITY_CNT  (16)
 
 #define CNXK_TT_FROM_TAG(x)	    (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x)	    (((x) >> 38) & SSO_TT_EMPTY)
@@ -54,6 +57,7 @@
 #define CN10K_GW_MODE_PREF     1
 #define CN10K_GW_MODE_PREF_WFE 2
 
+#define CNXK_QOS_NORMALIZE(val, max, cnt) (val / ((max + cnt - 1) / cnt))
 #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name)                               \
 	do {                                                                   \
 		if (strncmp(dev->driver->name, drv_name, strlen(drv_name)))    \
@@ -79,6 +83,11 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };
 
+struct cnxk_sso_mlt_prio {
+	uint8_t weight;
+	uint8_t affinity;
+};
+
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -108,6 +117,7 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
+	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -234,6 +244,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
+int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t *attr_value);
+int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint64_t attr_value);
 void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 			    struct rte_event_port_conf *port_conf);
 int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v2 6/6] common/cnxk: use lock when accessing mbox of SSO
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
                     ` (4 preceding siblings ...)
  2022-04-05  5:41   ` [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
@ 2022-04-05  5:41   ` Shijith Thotton
  2022-04-11 11:07   ` [PATCH v2 0/6] Extend and set event queue attributes at runtime Shijith Thotton
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
  7 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-05  5:41 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Pavan Nikhilesh, harry.van.haaren, mattias.ronnblom,
	Shijith Thotton, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Since the mbox is now accessed from multiple threads, use a lock to
synchronize access to it.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 drivers/common/cnxk/roc_sso.c      | 174 +++++++++++++++++++++--------
 drivers/common/cnxk/roc_sso_priv.h |   1 +
 drivers/common/cnxk/roc_tim.c      | 134 ++++++++++++++--------
 3 files changed, 215 insertions(+), 94 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index f8a0a96533..358d37a9f2 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	}
 
 	rc = mbox_process_msg(dev->mbox, rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 	}
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type,
 	}
 
 	req->modify = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type)
 	}
 
 	req->partial = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Failed to get free resource count\n");
-		return rc;
+		return -EIO;
 	}
 
 	roc_sso->max_hwgrp = rsrc_cnt->sso;
@@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp)
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_hws; i++)
 		sso->hws_msix_offset[i] = rsp->ssow_msixoff[i];
@@ -285,53 +285,71 @@ int
 roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
 		      struct roc_sso_hws_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_hws_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_hws_stats *)
 			mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->hws = hws;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->arbitration = req_rsp->arbitration;
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 			struct roc_sso_hwgrp_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_grp_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_grp_stats *)
 			mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->grp = hwgrp;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->aw_status = req_rsp->aw_status;
 	stats->dq_pc = req_rsp->dq_pc;
@@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 	stats->ts_pc = req_rsp->ts_pc;
 	stats->wa_pc = req_rsp->wa_pc;
 	stats->ws_pc = req_rsp->ws_pc;
-	return 0;
+
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -358,10 +379,12 @@ int
 roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			 uint8_t nb_qos, uint32_t nb_xaq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_qos_cfg *req;
 	int i, rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	for (i = 0; i < nb_qos; i++) {
 		uint8_t xaq_prcnt = qos[i].xaq_prcnt;
 		uint8_t iaq_prcnt = qos[i].iaq_prcnt;
@@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 		req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
 		if (req == NULL) {
 			rc = mbox_process(dev->mbox);
-			if (rc < 0)
-				return rc;
+			if (rc) {
+				rc = -EIO;
+				goto fail;
+			}
+
 			req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-			if (req == NULL)
-				return -ENOSPC;
+			if (req == NULL) {
+				rc = -ENOSPC;
+				goto fail;
+			}
 		}
 		req->grp = qos[i].hwgrp;
 		req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100;
@@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			       100;
 	}
 
-	return mbox_process(dev->mbox);
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		rc = -EIO;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
-				       roc_sso->xae_waes, roc_sso->xaq_buf_size,
-				       roc_sso->nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
+				     roc_sso->xae_waes, roc_sso->xaq_buf_size,
+				     roc_sso->nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 	req->npa_aura_id = npa_aura_id;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 			uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
 		return -EINVAL;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_priority *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->grp = hwgrp;
 	req->weight = weight;
 	req->affinity = affinity;
 	req->priority = priority;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
+	plt_spinlock_unlock(&sso->mbox_lock);
 	plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight,
 		    affinity, priority);
 
 	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	if (roc_sso->max_hws < nb_hws)
 		return -ENOENT;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
 	if (rc < 0) {
 		plt_err("Unable to attach SSO HWS LFs");
-		return rc;
+		goto fail;
 	}
 
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
@@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto sso_msix_fail;
 	}
 
+	plt_spinlock_unlock(&sso->mbox_lock);
 	roc_sso->nb_hwgrp = nb_hwgrp;
 	roc_sso->nb_hws = nb_hws;
 
@@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	roc_sso->nb_hwgrp = 0;
 	roc_sso->nb_hws = 0;
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
 
 int
@@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso = roc_sso_to_sso_priv(roc_sso);
 	memset(sso, 0, sizeof(*sso));
 	pci_dev = roc_sso->pci_dev;
+	plt_spinlock_init(&sso->mbox_lock);
 
 	rc = dev_init(&sso->dev, pci_dev);
 	if (rc < 0) {
@@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 		goto fail;
 	}
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_get(roc_sso);
 	if (rc < 0) {
 		plt_err("Failed to get SSO resources");
@@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso->pci_dev = pci_dev;
 	sso->dev.drv_inited = true;
 	roc_sso->lmt_base = sso->dev.lmt_base;
+	plt_spinlock_unlock(&sso->mbox_lock);
 
 	return 0;
 link_mem_free:
@@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 rsrc_fail:
 	rc |= dev_fini(&sso->dev, pci_dev);
 fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..674e4e0a39 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -22,6 +22,7 @@ struct sso {
 	/* SSO link mapping. */
 	struct plt_bitmap **link_map;
 	void *link_map_mem;
+	plt_spinlock_t mbox_lock;
 } __plt_cache_aligned;
 
 enum sso_err_status {
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index cefd9bc89d..0f9209937b 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -8,15 +8,16 @@
 static int
 tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct tim *tim = roc_tim_to_tim_priv(roc_tim);
+	struct dev *dev = &sso->dev;
 	struct msix_offset_rsp *rsp;
 	int i, rc;
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_ring; i++)
 		tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i];
@@ -88,20 +89,23 @@ int
 roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 		  uint32_t *cur_bkt)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_enable_rsp *rsp;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_enable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (cur_bkt)
@@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 	if (start_tsc)
 		*start_tsc = rsp->timestarted;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_disable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 uintptr_t
@@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 		  uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz,
 		  uint32_t interval, uint64_t intervalns, uint64_t clockfreq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_config_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_config_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 	req->bigendian = false;
 	req->bucketsize = bucket_sz;
@@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 	req->gpioedge = TIM_GPIO_LTOH_TRANS;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src,
 		    uint64_t clockfreq, uint64_t *intervalns,
 		    uint64_t *interval)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_intvl_req *req;
 	struct tim_intvl_rsp *rsp;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 
 	req->clockfreq = clockfreq;
 	req->clocksource = clk_src;
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	*intervalns = rsp->intvl_ns;
 	*interval = rsp->intvl_cyc;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	struct dev *dev = &sso->dev;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_alloc(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->npa_pf_func = idev_npa_pffunc_get();
 	req->sso_pf_func = idev_sso_pffunc_get();
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (clk)
@@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	if (rc < 0) {
 		plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
 		free_req = mbox_alloc_msg_tim_lf_free(dev->mbox);
-		if (free_req == NULL)
-			return -ENOSPC;
+		if (free_req == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 		free_req->ring = ring_id;
-		mbox_process(dev->mbox);
+		rc = mbox_process(dev->mbox);
+		if (rc)
+			rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
 	tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
 				tim->tim_msix_offsets[ring_id]);
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_free(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
 	if (rc < 0) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return 0;
 }
 
@@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim)
 	struct rsrc_attach_req *attach_req;
 	struct rsrc_detach_req *detach_req;
 	struct free_rsrcs_rsp *free_rsrc;
-	struct dev *dev;
+	struct sso *sso;
 	uint16_t nb_lfs;
+	struct dev *dev;
 	int rc;
 
 	if (roc_tim == NULL || roc_tim->roc_sso == NULL)
 		return TIM_ERR_PARAM;
 
+	sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	dev = &sso->dev;
 	PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ);
-	dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
 	nb_lfs = roc_tim->nb_lfs;
+	plt_spinlock_lock(&sso->mbox_lock);
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to get free rsrc count.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	if (nb_lfs && (free_rsrc->tim < nb_lfs)) {
 		plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs,
 			    free_rsrc->tim);
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	attach_req = mbox_alloc_msg_attach_resources(dev->mbox);
-	if (attach_req == NULL)
-		return -ENOSPC;
+	if (attach_req == NULL) {
+		nb_lfs = 0;
+		goto fail;
+	}
 	attach_req->modify = true;
 	attach_req->timlfs = nb_lfs ? nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@ roc_tim_init(struct roc_tim *roc_tim)
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
-- 
2.25.1



* Re: [PATCH v2 0/6] Extend and set event queue attributes at runtime
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
                     ` (5 preceding siblings ...)
  2022-04-05  5:41   ` [PATCH v2 6/6] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
@ 2022-04-11 11:07   ` Shijith Thotton
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
  7 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-04-11 11:07 UTC (permalink / raw)
  To: dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, harry.van.haaren, mattias.ronnblom


Please review and let me know if you have any comments.
________________________________
From: Shijith Thotton <sthotton@marvell.com>
Sent: Tuesday, April 5, 2022 11:10 AM
To: dev@dpdk.org <dev@dpdk.org>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Cc: Shijith Thotton <sthotton@marvell.com>; Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; harry.van.haaren@intel.com <harry.van.haaren@intel.com>; mattias.ronnblom@ericsson.com <mattias.ronnblom@ericsson.com>
Subject: [PATCH v2 0/6] Extend and set event queue attributes at runtime

This series adds support for setting event queue attributes at runtime
and introduces two new event queue attributes, weight and affinity. The
eventdev capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR exposes whether
attributes can be changed at runtime, and the rte_event_queue_attr_set()
API is used to set them.

The weight and affinity attributes are not yet added to the
rte_event_queue_conf structure to avoid an ABI break; they will be added
in 22.11. Until then, PMDs using the new attributes are expected to
manage them.

Test application changes and example implementation are added as last
three patches.

v2:
* Modified attr_value type from u32 to u64 for set().
* Removed RTE_EVENT_QUEUE_ATTR_MAX macro.
* Fixed return value in implementation.

Pavan Nikhilesh (1):
  common/cnxk: use lock when accessing mbox of SSO

Shijith Thotton (5):
  eventdev: support to set queue attributes at runtime
  eventdev: add weight and affinity to queue attributes
  doc: announce change in event queue conf structure
  test/event: test cases to test runtime queue attribute
  event/cnxk: support to set runtime queue attributes

 app/test/test_eventdev.c                  | 149 ++++++++++++++++++
 doc/guides/eventdevs/features/cnxk.ini    |   1 +
 doc/guides/eventdevs/features/default.ini |   1 +
 doc/guides/rel_notes/deprecation.rst      |   3 +
 drivers/common/cnxk/roc_sso.c             | 174 ++++++++++++++++------
 drivers/common/cnxk/roc_sso_priv.h        |   1 +
 drivers/common/cnxk/roc_tim.c             | 134 +++++++++++------
 drivers/event/cnxk/cn10k_eventdev.c       |   4 +
 drivers/event/cnxk/cn9k_eventdev.c        |   4 +
 drivers/event/cnxk/cnxk_eventdev.c        |  91 ++++++++++-
 drivers/event/cnxk/cnxk_eventdev.h        |  16 ++
 lib/eventdev/eventdev_pmd.h               |  44 ++++++
 lib/eventdev/rte_eventdev.c               |  38 +++++
 lib/eventdev/rte_eventdev.h               |  71 ++++++++-
 lib/eventdev/version.map                  |   3 +
 15 files changed, 631 insertions(+), 103 deletions(-)

--
2.25.1




* Re: [PATCH v2 1/6] eventdev: support to set queue attributes at runtime
  2022-04-05  5:40   ` [PATCH v2 1/6] eventdev: support to set " Shijith Thotton
@ 2022-05-09 12:43     ` Jerin Jacob
  0 siblings, 0 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-05-09 12:43 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom, Ray Kinsella

On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> attributes at runtime from the values set during initialization using
> rte_event_queue_setup(). PMD's supporting this feature should expose the
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>

Please update release notes.
With the above change,

Acked-by: Jerin Jacob <jerinj@marvell.com>


> ---
>  doc/guides/eventdevs/features/default.ini |  1 +
>  lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
>  lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>  lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
>  lib/eventdev/version.map                  |  3 +++
>  5 files changed, 84 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
> index 2ea233463a..00360f60c6 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -17,6 +17,7 @@ runtime_port_link          =
>  multiple_queue_port        =
>  carry_flow_id              =
>  maintenance_free           =
> +runtime_queue_attr         =
>
>  ;
>  ; Features of a default Ethernet Rx adapter.
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index ce469d47a6..3b85d9f7a5 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>  typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>                 uint8_t queue_id);
>
> +/**
> + * Set an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> +                                        uint8_t queue_id, uint32_t attr_id,
> +                                        uint64_t attr_value);
> +
>  /**
>   * Retrieve the default event port configuration.
>   *
> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>         /**< Set up an event queue. */
>         eventdev_queue_release_t queue_release;
>         /**< Release an event queue. */
> +       eventdev_queue_attr_set_t queue_attr_set;
> +       /**< Set an event queue attribute. */
>
>         eventdev_port_default_conf_get_t port_def_conf;
>         /**< Get default port configuration. */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 532a253553..a31e99be02 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>         return 0;
>  }
>
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +                        uint64_t attr_value)
> +{
> +       struct rte_eventdev *dev;
> +
> +       RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +       dev = &rte_eventdevs[dev_id];
> +       if (!is_valid_queue(dev, queue_id)) {
> +               RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> +               return -EINVAL;
> +       }
> +
> +       if (!(dev->data->event_dev_cap &
> +             RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
> +               RTE_EDEV_LOG_ERR(
> +                       "Device %" PRIu8 " does not support changing queue attributes at runtime",
> +                       dev_id);
> +               return -ENOTSUP;
> +       }
> +
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
> +       return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
> +                                              attr_value);
> +}
> +
>  int
>  rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>                     const uint8_t queues[], const uint8_t priorities[],
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 42a5660169..16e9d5fb5b 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -225,7 +225,7 @@ struct rte_event;
>  /**< Event scheduling prioritization is based on the priority associated with
>   *  each event queue.
>   *
> - *  @see rte_event_queue_setup()
> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>   */
>  #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>  /**< Event scheduling prioritization is based on the priority associated with
> @@ -307,6 +307,13 @@ struct rte_event;
>   * global pool, or process signaling related to load balancing.
>   */
>
> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> +/**< Event device is capable of changing the queue attributes at runtime i.e after
> + * rte_event_queue_setup() or rte_event_start() call sequence. If this flag is
> + * not set, eventdev queue attributes can only be configured during
> + * rte_event_queue_setup().
> + */
> +
>  /* Event device priority levels */
>  #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>  /**< Highest priority expressed across eventdev subsystem
> @@ -702,6 +709,30 @@ int
>  rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>                         uint32_t *attr_value);
>
> +/**
> + * Set an event queue attribute.
> + *
> + * @param dev_id
> + *   Eventdev id
> + * @param queue_id
> + *   Eventdev queue id
> + * @param attr_id
> + *   The attribute ID to set
> + * @param attr_value
> + *   The attribute value to set
> + *
> + * @return
> + *   - 0: Successfully set attribute.
> + *   - -EINVAL: invalid device, queue or attr_id.
> + *   - -ENOTSUP: device does not support setting event attribute.
> + *   - -EBUSY: device is in running state
> + *   - <0: failed to set event queue attribute
> + */
> +__rte_experimental
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +                        uint64_t attr_value);
> +
>  /* Event port specific APIs */
>
>  /* Event port configuration bitmap flags */
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index cd5dada07f..c581b75c18 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>
>         # added in 22.03
>         rte_event_eth_rx_adapter_event_port_get;
> +
> +       # added in 22.07
> +       rte_event_queue_attr_set;
>  };
>
>  INTERNAL {
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes
  2022-04-05  5:40   ` [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
@ 2022-05-09 12:46     ` Jerin Jacob
  0 siblings, 0 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-05-09 12:46 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom

On Tue, Apr 5, 2022 at 11:11 AM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Extended eventdev queue QoS attributes to support weight and affinity.
> If queues are of same priority, events from the queue with highest

the same priority

> weight will be scheduled first. Affinity indicates the number of times,
> the subsequent schedule calls from an event port will use the same event
> queue. Schedule call selects another queue if current queue goes empty
> or schedule count reaches affinity count.
>
> To avoid ABI break, weight and affinity attributes are not yet added to
> queue config structure and relies on PMD for managing it. New eventdev

rely on

> op queue_attr_get can be used to get it from the PMD.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>

Please update the release notes.

With the above change,

Acked-by: Jerin Jacob <jerinj@marvell.com>


> ---
>  lib/eventdev/eventdev_pmd.h | 22 +++++++++++++++++++++
>  lib/eventdev/rte_eventdev.c | 12 ++++++++++++
>  lib/eventdev/rte_eventdev.h | 38 +++++++++++++++++++++++++++++++++++--
>  3 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index 3b85d9f7a5..5495aee4f6 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>  typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>                 uint8_t queue_id);
>
> +/**
> + * Get an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param[out] attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
> +                                        uint8_t queue_id, uint32_t attr_id,
> +                                        uint32_t *attr_value);
> +
>  /**
>   * Set an event queue attribute at runtime.
>   *
> @@ -1231,6 +1251,8 @@ struct eventdev_ops {
>         /**< Set up an event queue. */
>         eventdev_queue_release_t queue_release;
>         /**< Release an event queue. */
> +       eventdev_queue_attr_get_t queue_attr_get;
> +       /**< Get an event queue attribute. */
>         eventdev_queue_attr_set_t queue_attr_set;
>         /**< Set an event queue attribute. */
>
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index a31e99be02..12b261f923 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>
>                 *attr_value = conf->schedule_type;
>                 break;
> +       case RTE_EVENT_QUEUE_ATTR_WEIGHT:
> +               *attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
> +               if (dev->dev_ops->queue_attr_get)
> +                       return (*dev->dev_ops->queue_attr_get)(
> +                               dev, queue_id, attr_id, attr_value);
> +               break;
> +       case RTE_EVENT_QUEUE_ATTR_AFFINITY:
> +               *attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
> +               if (dev->dev_ops->queue_attr_get)
> +                       return (*dev->dev_ops->queue_attr_get)(
> +                               dev, queue_id, attr_id, attr_value);
> +               break;
>         default:
>                 return -EINVAL;
>         };
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 16e9d5fb5b..a6fbaf1c11 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -222,8 +222,14 @@ struct rte_event;
>
>  /* Event device capability bitmap flags */
>  #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
> -/**< Event scheduling prioritization is based on the priority associated with
> - *  each event queue.
> +/**< Event scheduling prioritization is based on the priority and weight
> + * associated with each event queue. Events from the queue with the highest
> + * priority are scheduled first. If the queues are of the same priority, the
> + * weights of the queues are considered to select a queue in a weighted round
> + * robin fashion. Subsequent dequeue calls from an event port could see events
> + * from the same event queue, if the queue is configured with an affinity
> + * count. The affinity count is the number of subsequent dequeue calls in
> + * which an event port should use the same event queue if it is non-empty.
>   *
>   *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>   */
> @@ -331,6 +337,26 @@ struct rte_event;
>   * @see rte_event_port_link()
>   */
>
> +/* Event queue scheduling weights */
> +#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST   255
> +/**< Highest weight of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +#define RTE_EVENT_QUEUE_WEIGHT_LOWEST    0
> +/**< Lowest weight of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +
> +/* Event queue scheduling affinity */
> +#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST   255
> +/**< Highest scheduling affinity of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +#define RTE_EVENT_QUEUE_AFFINITY_LOWEST    0
> +/**< Lowest scheduling affinity of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +
>  /**
>   * Get the total number of event devices that have been successfully
>   * initialised.
> @@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>   * The schedule type of the queue.
>   */
>  #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
> +/**
> + * The weight of the queue.
> + */
> +#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
> +/**
> + * Affinity of the queue.
> + */
> +#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
>
>  /**
>   * Get an attribute from a queue.
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v2 3/6] doc: announce change in event queue conf structure
  2022-04-05  5:41   ` [PATCH v2 3/6] doc: announce change in event queue conf structure Shijith Thotton
@ 2022-05-09 12:47     ` Jerin Jacob
  2022-05-15 10:24     ` [PATCH v3] " Shijith Thotton
  1 sibling, 0 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-05-09 12:47 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom, Ray Kinsella

On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Structure rte_event_queue_conf will be extended to include fields to
> support weight and affinity attribute. Once it gets added in DPDK 22.11,
> eventdev internal op, queue_attr_get can be removed.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>

Please remove the deprecation notice patch from this series and send
it as a separate patch.

> ---
>  doc/guides/rel_notes/deprecation.rst | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 4e5b23c53d..04125db681 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -125,3 +125,6 @@ Deprecation Notices
>    applications should be updated to use the ``dmadev`` library instead,
>    with the underlying HW-functionality being provided by the ``ioat`` or
>    ``idxd`` dma drivers
> +
> +* eventdev: New fields to represent event queue weight and affinity will be
> +  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v2 4/6] test/event: test cases to test runtime queue attribute
  2022-04-05  5:41   ` [PATCH v2 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
@ 2022-05-09 12:55     ` Jerin Jacob
  0 siblings, 0 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-05-09 12:55 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom

On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Added test cases to test changing of queue QoS attributes priority,
> weight and affinity at runtime.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
>  app/test/test_eventdev.c | 149 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 149 insertions(+)
>
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 4f51042bda..1af93d3b77 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -385,6 +385,149 @@ test_eventdev_queue_attr_priority(void)
>         return TEST_SUCCESS;
>  }
>
> +static int
> +test_eventdev_queue_attr_priority_runtime(void)
> +{
> +       struct rte_event_queue_conf qconf;
> +       struct rte_event_dev_info info;
> +       uint32_t queue_count;
> +       int i, ret;
> +
> +       ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
> +       TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
> +
> +       if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
> +               return TEST_SKIPPED;
> +
> +       TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
> +                                   TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
> +                                   &queue_count),
> +                           "Queue count get failed");
> +
> +       for (i = 0; i < (int)queue_count; i++) {
> +               ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
> +               TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
> +               ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
> +               TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
> +       }
> +
> +       for (i = 0; i < (int)queue_count; i++) {
> +               uint32_t get_val;
> +               uint64_t set_val;
> +
> +               set_val = i % RTE_EVENT_DEV_PRIORITY_LOWEST;
> +               TEST_ASSERT_SUCCESS(
> +                       rte_event_queue_attr_set(TEST_DEV_ID, i,
> +                                                RTE_EVENT_QUEUE_ATTR_PRIORITY,
> +                                                set_val),
> +                       "Queue priority set failed");

If the return code is -ENOTSUP, please mark the test as TEST_SKIPPED

> +               TEST_ASSERT_SUCCESS(
> +                       rte_event_queue_attr_get(TEST_DEV_ID, i,
> +                                                RTE_EVENT_QUEUE_ATTR_PRIORITY,
> +                                                &get_val),
> +                       "Queue priority get failed");
> +               TEST_ASSERT_EQUAL(get_val, set_val,
> +                                 "Wrong priority value for queue%d", i);
> +       }
> +
> +       return TEST_SUCCESS;
> +}
> +
> +static int
> +test_eventdev_queue_attr_weight_runtime(void)
> +{
> +       struct rte_event_queue_conf qconf;
> +       struct rte_event_dev_info info;
> +       uint32_t queue_count;
> +       int i, ret;
> +
> +       ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
> +       TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
> +
> +       if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
> +               return TEST_SKIPPED;
> +
> +       TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
> +                                   TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
> +                                   &queue_count),
> +                           "Queue count get failed");
> +
> +       for (i = 0; i < (int)queue_count; i++) {
> +               ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
> +               TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
> +               ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
> +               TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
> +       }
> +
> +       for (i = 0; i < (int)queue_count; i++) {
> +               uint32_t get_val;
> +               uint64_t set_val;
> +
> +               set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
> +               TEST_ASSERT_SUCCESS(
> +                       rte_event_queue_attr_set(TEST_DEV_ID, i,
> +                                                RTE_EVENT_QUEUE_ATTR_WEIGHT,
> +                                                set_val),
> +                       "Queue weight set failed");

If the return code is -ENOTSUP, please mark the test as TEST_SKIPPED


> +               TEST_ASSERT_SUCCESS(rte_event_queue_attr_get(
> +                                           TEST_DEV_ID, i,
> +                                           RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val),
> +                                   "Queue weight get failed");
> +               TEST_ASSERT_EQUAL(get_val, set_val,
> +                                 "Wrong weight value for queue%d", i);
> +       }
> +
> +       return TEST_SUCCESS;
> +}
> +
> +static int
> +test_eventdev_queue_attr_affinity_runtime(void)
> +{

Please use the rte_event_dequeue_burst() API to exercise the full
functionality and validate the feature for both the priority and
affinity test cases.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes
  2022-04-05  5:41   ` [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
@ 2022-05-09 12:57     ` Jerin Jacob
  0 siblings, 0 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-05-09 12:57 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom

On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Added API to set queue attributes at runtime and API to get weight and
> affinity.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
>  doc/guides/eventdevs/features/cnxk.ini |  1 +
>  drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
>  drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
>  drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
>  drivers/event/cnxk/cnxk_eventdev.h     | 16 +++++
>  5 files changed, 110 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
> index 7633c6e3a2..bee69bf8f4 100644
> --- a/doc/guides/eventdevs/features/cnxk.ini
> +++ b/doc/guides/eventdevs/features/cnxk.ini
> @@ -12,6 +12,7 @@ runtime_port_link          = Y
>  multiple_queue_port        = Y
>  carry_flow_id              = Y
>  maintenance_free           = Y
> +runtime_queue_attr         = Y
> +
>         .port_def_conf = cnxk_sso_port_def_conf,
>         .port_setup = cn9k_sso_port_setup,
>         .port_release = cn9k_sso_port_release,
> diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
> index be021d86c9..e07cb589f2 100644
> --- a/drivers/event/cnxk/cnxk_eventdev.c
> +++ b/drivers/event/cnxk/cnxk_eventdev.c
> @@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
>                                   RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>                                   RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>                                   RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> -                                 RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
> +                                 RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
> +                                 RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;

Please swap 6/6 and 5/6 so as to avoid the runtime failure at this point.

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 0/5] Extend and set event queue attributes at runtime
  2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
                     ` (6 preceding siblings ...)
  2022-04-11 11:07   ` [PATCH v2 0/6] Extend and set event queue attributes at runtime Shijith Thotton
@ 2022-05-15  9:53   ` Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 1/5] eventdev: support to set " Shijith Thotton
                       ` (5 more replies)
  7 siblings, 6 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15  9:53 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

This series adds support for setting event queue attributes at runtime
and adds two new event queue attributes weight and affinity. Eventdev
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to expose the
capability to set attributes at runtime and rte_event_queue_attr_set()
API is used to set the attributes.

Attributes weight and affinity are not yet added to rte_event_queue_conf
structure to avoid ABI break and will be added in 22.11. Till then, PMDs
using the new attributes are expected to manage them.

Test application changes and example implementation are added as last
three patches.

v3:
* Updated release notes.
* Removed deprecation patch from series.
* Used event enq/deq to test queue priority.

v2:
* Modified attr_value type from u32 to u64 for set().
* Removed RTE_EVENT_QUEUE_ATTR_MAX macro.
* Fixed return value in implementation.


Pavan Nikhilesh (1):
  common/cnxk: use lock when accessing mbox of SSO

Shijith Thotton (4):
  eventdev: support to set queue attributes at runtime
  eventdev: add weight and affinity to queue attributes
  test/event: test cases to test runtime queue attribute
  event/cnxk: support to set runtime queue attributes

 app/test/test_eventdev.c                  | 201 ++++++++++++++++++++++
 doc/guides/eventdevs/features/cnxk.ini    |   1 +
 doc/guides/eventdevs/features/default.ini |   1 +
 doc/guides/rel_notes/release_22_07.rst    |  12 ++
 drivers/common/cnxk/roc_sso.c             | 174 +++++++++++++------
 drivers/common/cnxk/roc_sso_priv.h        |   1 +
 drivers/common/cnxk/roc_tim.c             | 134 ++++++++++-----
 drivers/event/cnxk/cn10k_eventdev.c       |   4 +
 drivers/event/cnxk/cn9k_eventdev.c        |   4 +
 drivers/event/cnxk/cnxk_eventdev.c        |  91 +++++++++-
 drivers/event/cnxk/cnxk_eventdev.h        |  19 ++
 lib/eventdev/eventdev_pmd.h               |  44 +++++
 lib/eventdev/rte_eventdev.c               |  38 ++++
 lib/eventdev/rte_eventdev.h               |  71 +++++++-
 lib/eventdev/version.map                  |   3 +
 15 files changed, 695 insertions(+), 103 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 1/5] eventdev: support to set queue attributes at runtime
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
@ 2022-05-15  9:53     ` Shijith Thotton
  2022-05-15 13:11       ` Mattias Rönnblom
  2022-05-15  9:53     ` [PATCH v3 2/5] eventdev: add weight and affinity to queue attributes Shijith Thotton
                       ` (4 subsequent siblings)
  5 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15  9:53 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Added a new eventdev API rte_event_queue_attr_set(), to set event queue
attributes at runtime from the values set during initialization using
rte_event_queue_setup(). PMDs supporting this feature should expose the
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 doc/guides/eventdevs/features/default.ini |  1 +
 doc/guides/rel_notes/release_22_07.rst    |  5 ++++
 lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
 lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
 lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
 lib/eventdev/version.map                  |  3 +++
 6 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 2ea233463a..00360f60c6 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -17,6 +17,7 @@ runtime_port_link          =
 multiple_queue_port        =
 carry_flow_id              =
 maintenance_free           =
+runtime_queue_attr         =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 88d6e96cc1..a7a912d665 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,11 @@ New Features
   * Added support for promiscuous mode on Windows.
   * Added support for MTU on Windows.
 
+* **Added support for setting queue attributes at runtime in eventdev.**
+
+  Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
+  at runtime.
+
 
 Removed Items
 -------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..3b85d9f7a5 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Set an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint64_t attr_value);
+
 /**
  * Retrieve the default event port configuration.
  *
@@ -1211,6 +1231,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_set_t queue_attr_set;
+	/**< Set an event queue attribute. */
 
 	eventdev_port_default_conf_get_t port_def_conf;
 	/**< Get default port configuration. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..a31e99be02 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	if (!(dev->data->event_dev_cap &
+	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
+		RTE_EDEV_LOG_ERR(
			"Device %" PRIu8 " does not support changing queue attributes at runtime",
+			dev_id);
+		return -ENOTSUP;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
+	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
+					       attr_value);
+}
+
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..c1163ee8ec 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -225,7 +225,7 @@ struct rte_event;
 /**< Event scheduling prioritization is based on the priority associated with
  *  each event queue.
  *
- *  @see rte_event_queue_setup()
+ *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
@@ -307,6 +307,13 @@ struct rte_event;
  * global pool, or process signaling related to load balancing.
  */
 
+#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
+/**< Event device is capable of changing the queue attributes at runtime,
+ * i.e. after the rte_event_queue_setup() or rte_event_dev_start() call
+ * sequence. If this flag is not set, eventdev queue attributes can only be
+ * configured during rte_event_queue_setup().
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -702,6 +709,30 @@ int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value);
 
+/**
+ * Set an event queue attribute.
+ *
+ * @param dev_id
+ *   Eventdev id
+ * @param queue_id
+ *   Eventdev queue id
+ * @param attr_id
+ *   The attribute ID to set
+ * @param attr_value
+ *   The attribute value to set
+ *
+ * @return
+ *   - 0: Successfully set attribute.
+ *   - -EINVAL: invalid device, queue or attr_id.
+ *   - -ENOTSUP: device does not support setting event attribute.
+ *   - -EBUSY: device is in running state
+ *   - <0: failed to set event queue attribute
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 2/5] eventdev: add weight and affinity to queue attributes
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 1/5] eventdev: support to set " Shijith Thotton
@ 2022-05-15  9:53     ` Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 3/5] test/event: test cases to test runtime queue attribute Shijith Thotton
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15  9:53 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Extended eventdev queue QoS attributes to support weight and affinity.
If queues are of the same priority, events from the queue with the
highest weight will be scheduled first. Affinity indicates the number
of times subsequent schedule calls from an event port will use the same
event queue. The schedule call selects another queue if the current
queue goes empty or the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure; managing them is left to the PMD.
The new eventdev op queue_attr_get can be used to get them from the PMD.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 doc/guides/rel_notes/release_22_07.rst |  7 +++++
 lib/eventdev/eventdev_pmd.h            | 22 +++++++++++++++
 lib/eventdev/rte_eventdev.c            | 12 ++++++++
 lib/eventdev/rte_eventdev.h            | 38 ++++++++++++++++++++++++--
 4 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a7a912d665..f35a31bbdf 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -70,6 +70,13 @@ New Features
   Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
   at runtime.
 
+* **Added new queues attributes weight and affinity in eventdev.**
+
+  Defined new event queue attributes weight and affinity as below:
+
+  * ``RTE_EVENT_QUEUE_ATTR_WEIGHT``
+  * ``RTE_EVENT_QUEUE_ATTR_AFFINITY``
+
 
 Removed Items
 -------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 3b85d9f7a5..5495aee4f6 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute. */
 
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index a31e99be02..12b261f923 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 		*attr_value = conf->schedule_type;
 		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index c1163ee8ec..5d38996f6b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -222,8 +222,14 @@ struct rte_event;
 
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
-/**< Event scheduling prioritization is based on the priority associated with
- *  each event queue.
+/**< Event scheduling prioritization is based on the priority and weight
+ * associated with each event queue. Events from the queue with the highest
+ * priority are scheduled first. If queues are of the same priority, their
+ * weights are used to select a queue in a weighted round robin fashion.
+ * Subsequent dequeue calls from an event port could see events from the same
+ * event queue if the queue is configured with an affinity count. The affinity
+ * count is the number of subsequent dequeue calls in which an event port
+ * should use the same event queue if the queue is non-empty.
  *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
@@ -331,6 +337,26 @@ struct rte_event;
  * @see rte_event_port_link()
  */
 
+/* Event queue scheduling weights */
+#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
+/**< Highest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
+/**< Lowest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
+/* Event queue scheduling affinity */
+#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
+/**< Highest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
+/**< Lowest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
 /**
  * Get the total number of event devices that have been successfully
  * initialised.
@@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  * The schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
+/**
+ * The weight of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
+/**
+ * Affinity of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
  * Get an attribute from a queue.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 3/5] test/event: test cases to test runtime queue attribute
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 1/5] eventdev: support to set " Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 2/5] eventdev: add weight and affinity to queue attributes Shijith Thotton
@ 2022-05-15  9:53     ` Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 4/5] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15  9:53 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Added test cases to verify changing the queue QoS attributes priority,
weight and affinity at runtime.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 app/test/test_eventdev.c | 201 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 201 insertions(+)

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 4f51042bda..336529038e 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -385,6 +385,201 @@ test_eventdev_queue_attr_priority(void)
 	return TEST_SUCCESS;
 }
 
+static int
+test_eventdev_queue_attr_priority_runtime(void)
+{
+	uint32_t queue_count, queue_req, prio, deq_cnt;
+	struct rte_event_queue_conf qconf;
+	struct rte_event_port_conf pconf;
+	struct rte_event_dev_info info;
+	struct rte_event event = {
+		.op = RTE_EVENT_OP_NEW,
+		.event_type = RTE_EVENT_TYPE_CPU,
+		.sched_type = RTE_SCHED_TYPE_ATOMIC,
+		.u64 = 0xbadbadba,
+	};
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	/* Need at least 2 queues to test LOW and HIGH priority. */
+	TEST_ASSERT(queue_count > 1, "Not enough event queues, needed 2");
+	queue_req = 2;
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	ret = rte_event_queue_attr_set(TEST_DEV_ID, 0,
+				       RTE_EVENT_QUEUE_ATTR_PRIORITY,
+				       RTE_EVENT_DEV_PRIORITY_LOWEST);
+	if (ret == -ENOTSUP)
+		return TEST_SKIPPED;
+	TEST_ASSERT_SUCCESS(ret, "Queue0 priority set failed");
+
+	ret = rte_event_queue_attr_set(TEST_DEV_ID, 1,
+				       RTE_EVENT_QUEUE_ATTR_PRIORITY,
+				       RTE_EVENT_DEV_PRIORITY_HIGHEST);
+	if (ret == -ENOTSUP)
+		return TEST_SKIPPED;
+	TEST_ASSERT_SUCCESS(ret, "Queue1 priority set failed");
+
+	/* Setup event port 0 */
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+	ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0);
+	TEST_ASSERT(ret == (int)queue_count, "Failed to link port, device %d",
+		    TEST_DEV_ID);
+
+	ret = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID);
+
+	for (i = 0; i < (int)queue_req; i++) {
+		event.queue_id = i;
+		while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1)
+			rte_pause();
+	}
+
+	prio = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	deq_cnt = 0;
+	while (deq_cnt < queue_req) {
+		uint32_t queue_prio;
+
+		if (rte_event_dequeue_burst(TEST_DEV_ID, 0, &event, 1, 0) == 0)
+			continue;
+
+		ret = rte_event_queue_attr_get(TEST_DEV_ID, event.queue_id,
+					       RTE_EVENT_QUEUE_ATTR_PRIORITY,
+					       &queue_prio);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue priority get failed");
+		TEST_ASSERT(queue_prio >= prio,
+			    "Received event from a lower priority queue first");
+		prio = queue_prio;
+		deq_cnt++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_weight_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+		ret = rte_event_queue_attr_set(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, set_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue weight set failed");
+
+		ret = rte_event_queue_attr_get(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue weight get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong weight value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_affinity_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+		ret = rte_event_queue_attr_set(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, set_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue affinity set failed");
+
+		ret = rte_event_queue_attr_get(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, &get_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue affinity get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong affinity value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_queue_attr_nb_atomic_flows(void)
 {
@@ -964,6 +1159,12 @@ static struct unit_test_suite eventdev_common_testsuite  = {
 			test_eventdev_queue_count),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_priority),
+		TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+			test_eventdev_queue_attr_priority_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_weight_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_affinity_runtime),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_nb_atomic_flows),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 4/5] common/cnxk: use lock when accessing mbox of SSO
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
                       ` (2 preceding siblings ...)
  2022-05-15  9:53     ` [PATCH v3 3/5] test/event: test cases to test runtime queue attribute Shijith Thotton
@ 2022-05-15  9:53     ` Shijith Thotton
  2022-05-15  9:53     ` [PATCH v3 5/5] event/cnxk: support to set runtime queue attributes Shijith Thotton
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
  5 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15  9:53 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Pavan Nikhilesh, harry.van.haaren, mattias.ronnblom, mdr,
	Shijith Thotton, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Since the mbox is now accessed from multiple threads, use a lock to
synchronize access.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 drivers/common/cnxk/roc_sso.c      | 174 +++++++++++++++++++++--------
 drivers/common/cnxk/roc_sso_priv.h |   1 +
 drivers/common/cnxk/roc_tim.c      | 134 ++++++++++++++--------
 3 files changed, 215 insertions(+), 94 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index f8a0a96533..358d37a9f2 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	}
 
 	rc = mbox_process_msg(dev->mbox, rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 	}
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type,
 	}
 
 	req->modify = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type)
 	}
 
 	req->partial = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Failed to get free resource count\n");
-		return rc;
+		return -EIO;
 	}
 
 	roc_sso->max_hwgrp = rsrc_cnt->sso;
@@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp)
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_hws; i++)
 		sso->hws_msix_offset[i] = rsp->ssow_msixoff[i];
@@ -285,53 +285,71 @@ int
 roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
 		      struct roc_sso_hws_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_hws_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_hws_stats *)
 			mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->hws = hws;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->arbitration = req_rsp->arbitration;
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 			struct roc_sso_hwgrp_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_grp_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_grp_stats *)
 			mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->grp = hwgrp;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->aw_status = req_rsp->aw_status;
 	stats->dq_pc = req_rsp->dq_pc;
@@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 	stats->ts_pc = req_rsp->ts_pc;
 	stats->wa_pc = req_rsp->wa_pc;
 	stats->ws_pc = req_rsp->ws_pc;
-	return 0;
+
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -358,10 +379,12 @@ int
 roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			 uint8_t nb_qos, uint32_t nb_xaq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_qos_cfg *req;
 	int i, rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	for (i = 0; i < nb_qos; i++) {
 		uint8_t xaq_prcnt = qos[i].xaq_prcnt;
 		uint8_t iaq_prcnt = qos[i].iaq_prcnt;
@@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 		req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
 		if (req == NULL) {
 			rc = mbox_process(dev->mbox);
-			if (rc < 0)
-				return rc;
+			if (rc) {
+				rc = -EIO;
+				goto fail;
+			}
+
 			req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-			if (req == NULL)
-				return -ENOSPC;
+			if (req == NULL) {
+				rc = -ENOSPC;
+				goto fail;
+			}
 		}
 		req->grp = qos[i].hwgrp;
 		req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100;
@@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			       100;
 	}
 
-	return mbox_process(dev->mbox);
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		rc = -EIO;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
-				       roc_sso->xae_waes, roc_sso->xaq_buf_size,
-				       roc_sso->nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
+				     roc_sso->xae_waes, roc_sso->xaq_buf_size,
+				     roc_sso->nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 	req->npa_aura_id = npa_aura_id;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 			uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
 		return -EINVAL;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_priority *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->grp = hwgrp;
 	req->weight = weight;
 	req->affinity = affinity;
 	req->priority = priority;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
+	plt_spinlock_unlock(&sso->mbox_lock);
 	plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight,
 		    affinity, priority);
 
 	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	if (roc_sso->max_hws < nb_hws)
 		return -ENOENT;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
 	if (rc < 0) {
 		plt_err("Unable to attach SSO HWS LFs");
-		return rc;
+		goto fail;
 	}
 
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
@@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto sso_msix_fail;
 	}
 
+	plt_spinlock_unlock(&sso->mbox_lock);
 	roc_sso->nb_hwgrp = nb_hwgrp;
 	roc_sso->nb_hws = nb_hws;
 
@@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	roc_sso->nb_hwgrp = 0;
 	roc_sso->nb_hws = 0;
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
 
 int
@@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso = roc_sso_to_sso_priv(roc_sso);
 	memset(sso, 0, sizeof(*sso));
 	pci_dev = roc_sso->pci_dev;
+	plt_spinlock_init(&sso->mbox_lock);
 
 	rc = dev_init(&sso->dev, pci_dev);
 	if (rc < 0) {
@@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 		goto fail;
 	}
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_get(roc_sso);
 	if (rc < 0) {
 		plt_err("Failed to get SSO resources");
@@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso->pci_dev = pci_dev;
 	sso->dev.drv_inited = true;
 	roc_sso->lmt_base = sso->dev.lmt_base;
+	plt_spinlock_unlock(&sso->mbox_lock);
 
 	return 0;
 link_mem_free:
@@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 rsrc_fail:
 	rc |= dev_fini(&sso->dev, pci_dev);
 fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..674e4e0a39 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -22,6 +22,7 @@ struct sso {
 	/* SSO link mapping. */
 	struct plt_bitmap **link_map;
 	void *link_map_mem;
+	plt_spinlock_t mbox_lock;
 } __plt_cache_aligned;
 
 enum sso_err_status {
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index cefd9bc89d..0f9209937b 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -8,15 +8,16 @@
 static int
 tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct tim *tim = roc_tim_to_tim_priv(roc_tim);
+	struct dev *dev = &sso->dev;
 	struct msix_offset_rsp *rsp;
 	int i, rc;
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_ring; i++)
 		tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i];
@@ -88,20 +89,23 @@ int
 roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 		  uint32_t *cur_bkt)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_enable_rsp *rsp;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_enable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (cur_bkt)
@@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 	if (start_tsc)
 		*start_tsc = rsp->timestarted;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_disable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 uintptr_t
@@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 		  uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz,
 		  uint32_t interval, uint64_t intervalns, uint64_t clockfreq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_config_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_config_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 	req->bigendian = false;
 	req->bucketsize = bucket_sz;
@@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 	req->gpioedge = TIM_GPIO_LTOH_TRANS;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src,
 		    uint64_t clockfreq, uint64_t *intervalns,
 		    uint64_t *interval)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_intvl_req *req;
 	struct tim_intvl_rsp *rsp;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 
 	req->clockfreq = clockfreq;
 	req->clocksource = clk_src;
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	*intervalns = rsp->intvl_ns;
 	*interval = rsp->intvl_cyc;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	struct dev *dev = &sso->dev;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_alloc(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->npa_pf_func = idev_npa_pffunc_get();
 	req->sso_pf_func = idev_sso_pffunc_get();
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (clk)
@@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	if (rc < 0) {
 		plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
 		free_req = mbox_alloc_msg_tim_lf_free(dev->mbox);
-		if (free_req == NULL)
-			return -ENOSPC;
+		if (free_req == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 		free_req->ring = ring_id;
-		mbox_process(dev->mbox);
+		rc = mbox_process(dev->mbox);
+		if (rc)
+			rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
 	tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
 				tim->tim_msix_offsets[ring_id]);
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_free(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
 	if (rc < 0) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return 0;
 }
 
@@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim)
 	struct rsrc_attach_req *attach_req;
 	struct rsrc_detach_req *detach_req;
 	struct free_rsrcs_rsp *free_rsrc;
-	struct dev *dev;
+	struct sso *sso;
 	uint16_t nb_lfs;
+	struct dev *dev;
 	int rc;
 
 	if (roc_tim == NULL || roc_tim->roc_sso == NULL)
 		return TIM_ERR_PARAM;
 
+	sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	dev = &sso->dev;
 	PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ);
-	dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
 	nb_lfs = roc_tim->nb_lfs;
+	plt_spinlock_lock(&sso->mbox_lock);
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to get free rsrc count.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	if (nb_lfs && (free_rsrc->tim < nb_lfs)) {
 		plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs,
 			    free_rsrc->tim);
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	attach_req = mbox_alloc_msg_attach_resources(dev->mbox);
-	if (attach_req == NULL)
-		return -ENOSPC;
+	if (attach_req == NULL) {
+		nb_lfs = 0;
+		goto fail;
+	}
 	attach_req->modify = true;
 	attach_req->timlfs = nb_lfs ? nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@ roc_tim_init(struct roc_tim *roc_tim)
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 5/5] event/cnxk: support to set runtime queue attributes
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
                       ` (3 preceding siblings ...)
  2022-05-15  9:53     ` [PATCH v3 4/5] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
@ 2022-05-15  9:53     ` Shijith Thotton
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
  5 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15  9:53 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Added API to set queue attributes at runtime and API to get weight and
affinity.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 19 ++++++
 5 files changed, 113 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = Y
 
 [Eth Rx adapter Features]
 internal_port              = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..f6973bb691 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -845,9 +845,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..7cb59bbbfa 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1079,9 +1079,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..a2829b817e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 				  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 				  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+				  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }
 
 int
@@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 		     const struct rte_event_queue_conf *queue_conf)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
-	/* Normalize <0-255> to <0-7> */
-	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
-					  queue_conf->priority / 32);
+	uint8_t priority, weight, affinity;
+
+	/* Default weight and affinity */
+	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+
+	priority = CNXK_QOS_NORMALIZE(queue_conf->priority, 0,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(
+		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
+		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
+		    priority, weight, affinity);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
 }
 
 void
@@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+int
+cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t *attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+	if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT)
+		*attr_value = dev->mlt_prio[queue_id].weight;
+	else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY)
+		*attr_value = dev->mlt_prio[queue_id].affinity;
+	else
+		return -EINVAL;
+
+	return 0;
+}
+
+int
+cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint64_t attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t priority, weight, affinity;
+	struct rte_event_queue_conf *conf;
+
+	conf = &event_dev->data->queues_cfg[queue_id];
+
+	switch (attr_id) {
+	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
+		conf->priority = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		dev->mlt_prio[queue_id].weight = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		dev->mlt_prio[queue_id].affinity = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS:
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES:
+	case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG:
+	case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE:
+		/* FALLTHROUGH */
+		plt_sso_dbg("Unsupported attribute id %u", attr_id);
+		return -ENOTSUP;
+	default:
+		plt_err("Invalid attribute id %u", attr_id);
+		return -EINVAL;
+	}
+
+	priority = CNXK_QOS_NORMALIZE(conf->priority, 0,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(
+		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
+		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 		       struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 5564746e6d..531f6d1a84 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,6 +38,11 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
+#define CNXK_SSO_PRIORITY_CNT  (0x8)
+#define CNXK_SSO_WEIGHT_MAX    (0x3f)
+#define CNXK_SSO_WEIGHT_MIN    (0x3)
+#define CNXK_SSO_WEIGHT_CNT    (CNXK_SSO_WEIGHT_MAX - CNXK_SSO_WEIGHT_MIN + 1)
+#define CNXK_SSO_AFFINITY_CNT  (0x10)
 
 #define CNXK_TT_FROM_TAG(x)	    (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x)	    (((x) >> 38) & SSO_TT_EMPTY)
@@ -54,6 +59,8 @@
 #define CN10K_GW_MODE_PREF     1
 #define CN10K_GW_MODE_PREF_WFE 2
 
+#define CNXK_QOS_NORMALIZE(val, min, max, cnt)                                 \
+	((min) + (val) / (((max) + (cnt) - 1) / (cnt)))
 #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name)                               \
 	do {                                                                   \
 		if (strncmp(dev->driver->name, drv_name, strlen(drv_name)))    \
@@ -79,6 +86,11 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };
 
+struct cnxk_sso_mlt_prio {
+	uint8_t weight;
+	uint8_t affinity;
+};
+
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -108,6 +120,7 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
+	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -234,6 +247,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
+int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t *attr_value);
+int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint64_t attr_value);
 void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 			    struct rte_event_port_conf *port_conf);
 int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
-- 
2.25.1



* [PATCH v3] doc: announce change in event queue conf structure
  2022-04-05  5:41   ` [PATCH v2 3/6] doc: announce change in event queue conf structure Shijith Thotton
  2022-05-09 12:47     ` Jerin Jacob
@ 2022-05-15 10:24     ` Shijith Thotton
  2022-07-12 14:05       ` Jerin Jacob
  2022-07-17 12:52       ` Thomas Monjalon
  1 sibling, 2 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-15 10:24 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Structure rte_event_queue_conf will be extended to include fields to
support the weight and affinity attributes. Once they are added in DPDK
22.11, the eventdev internal op queue_attr_get can be removed.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c53d..04125db681 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,3 +125,6 @@ Deprecation Notices
   applications should be updated to use the ``dmadev`` library instead,
   with the underlying HW-functionality being provided by the ``ioat`` or
   ``idxd`` dma drivers
+
+* eventdev: New fields to represent event queue weight and affinity will be
+  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
-- 
2.25.1



* Re: [PATCH v3 1/5] eventdev: support to set queue attributes at runtime
  2022-05-15  9:53     ` [PATCH v3 1/5] eventdev: support to set " Shijith Thotton
@ 2022-05-15 13:11       ` Mattias Rönnblom
  2022-05-16  3:57         ` Shijith Thotton
  0 siblings, 1 reply; 58+ messages in thread
From: Mattias Rönnblom @ 2022-05-15 13:11 UTC (permalink / raw)
  To: Shijith Thotton, dev, jerinj; +Cc: pbhagavatula, harry.van.haaren, mdr

On 2022-05-15 11:53, Shijith Thotton wrote:
> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> attributes at runtime from the values set during initialization using
> rte_event_queue_setup(). PMD's supporting this feature should expose the
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
> 
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
>   doc/guides/eventdevs/features/default.ini |  1 +
>   doc/guides/rel_notes/release_22_07.rst    |  5 ++++
>   lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
>   lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>   lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
>   lib/eventdev/version.map                  |  3 +++
>   6 files changed, 89 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
> index 2ea233463a..00360f60c6 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -17,6 +17,7 @@ runtime_port_link          =
>   multiple_queue_port        =
>   carry_flow_id              =
>   maintenance_free           =
> +runtime_queue_attr         =
>   
>   ;
>   ; Features of a default Ethernet Rx adapter.
> diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
> index 88d6e96cc1..a7a912d665 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -65,6 +65,11 @@ New Features
>     * Added support for promiscuous mode on Windows.
>     * Added support for MTU on Windows.
>   
> +* **Added support for setting queue attributes at runtime in eventdev.**
> +
> +  Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
> +  at runtime.
> +
>   
>   Removed Items
>   -------------
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index ce469d47a6..3b85d9f7a5 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>   typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>   		uint8_t queue_id);
>   
> +/**
> + * Set an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> +					 uint8_t queue_id, uint32_t attr_id,
> +					 uint64_t attr_value);
> +
>   /**
>    * Retrieve the default event port configuration.
>    *
> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>   	/**< Set up an event queue. */
>   	eventdev_queue_release_t queue_release;
>   	/**< Release an event queue. */
> +	eventdev_queue_attr_set_t queue_attr_set;
> +	/**< Set an event queue attribute. */
>   
>   	eventdev_port_default_conf_get_t port_def_conf;
>   	/**< Get default port configuration. */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 532a253553..a31e99be02 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>   	return 0;
>   }
>   
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +			 uint64_t attr_value)
> +{
> +	struct rte_eventdev *dev;
> +
> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +	dev = &rte_eventdevs[dev_id];
> +	if (!is_valid_queue(dev, queue_id)) {
> +		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> +		return -EINVAL;
> +	}
> +
> +	if (!(dev->data->event_dev_cap &
> +	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
> +		RTE_EDEV_LOG_ERR(
> +			"Device %" PRIu8 " does not support changing queue attributes at runtime",
> +			dev_id);
> +		return -ENOTSUP;
> +	}
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
> +	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
> +					       attr_value);
> +}
> +
>   int
>   rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>   		    const uint8_t queues[], const uint8_t priorities[],
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 42a5660169..c1163ee8ec 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -225,7 +225,7 @@ struct rte_event;
>   /**< Event scheduling prioritization is based on the priority associated with
>    *  each event queue.
>    *
> - *  @see rte_event_queue_setup()
> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>    */
>   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>   /**< Event scheduling prioritization is based on the priority associated with
> @@ -307,6 +307,13 @@ struct rte_event;
>    * global pool, or process signaling related to load balancing.
>    */
>   
> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> +/**< Event device is capable of changing the queue attributes at runtime i.e
> + * after rte_event_queue_setup() or rte_event_start() call sequence. If this
> + * flag is not set, eventdev queue attributes can only be configured during
> + * rte_event_queue_setup().
> + */
> +
>   /* Event device priority levels */
>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>   /**< Highest priority expressed across eventdev subsystem
> @@ -702,6 +709,30 @@ int
>   rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>   			uint32_t *attr_value);
>   
> +/**
> + * Set an event queue attribute.
> + *
> + * @param dev_id
> + *   Eventdev id
> + * @param queue_id
> + *   Eventdev queue id
> + * @param attr_id
> + *   The attribute ID to set
> + * @param attr_value
> + *   The attribute value to set
> + *
> + * @return
> + *   - 0: Successfully set attribute.
> + *   - -EINVAL: invalid device, queue or attr_id.
> + *   - -ENOTSUP: device does not support setting event attribute.
> + *   - -EBUSY: device is in running state

I thought the point of this new interface was to allow setting queue 
attributes when the event device was running?

It would be useful for the caller to be able to distinguish between 
"busy, but please try again later", and "busy, forever". Maybe the 
latter is what's meant here? In that case, what is the difference with 
EBUSY and ENOTSUP?

> + *   - <0: failed to set event queue attribute
> + */
> +__rte_experimental
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +			 uint64_t attr_value);
> +
>   /* Event port specific APIs */
>   
>   /* Event port configuration bitmap flags */
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index cd5dada07f..c581b75c18 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>   
>   	# added in 22.03
>   	rte_event_eth_rx_adapter_event_port_get;
> +
> +	# added in 22.07
> +	rte_event_queue_attr_set;
>   };
>   
>   INTERNAL {



* RE: [PATCH v3 1/5] eventdev: support to set queue attributes at runtime
  2022-05-15 13:11       ` Mattias Rönnblom
@ 2022-05-16  3:57         ` Shijith Thotton
  2022-05-16 10:23           ` Mattias Rönnblom
  0 siblings, 1 reply; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16  3:57 UTC (permalink / raw)
  To: Mattias Rönnblom, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, harry.van.haaren, mdr

>> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
>> attributes at runtime from the values set during initialization using
>> rte_event_queue_setup(). PMD's supporting this feature should expose the
>> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> Acked-by: Jerin Jacob <jerinj@marvell.com>
>> ---
>>   doc/guides/eventdevs/features/default.ini |  1 +
>>   doc/guides/rel_notes/release_22_07.rst    |  5 ++++
>>   lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
>>   lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>>   lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
>>   lib/eventdev/version.map                  |  3 +++
>>   6 files changed, 89 insertions(+), 1 deletion(-)
>>
>> diff --git a/doc/guides/eventdevs/features/default.ini
>b/doc/guides/eventdevs/features/default.ini
>> index 2ea233463a..00360f60c6 100644
>> --- a/doc/guides/eventdevs/features/default.ini
>> +++ b/doc/guides/eventdevs/features/default.ini
>> @@ -17,6 +17,7 @@ runtime_port_link          =
>>   multiple_queue_port        =
>>   carry_flow_id              =
>>   maintenance_free           =
>> +runtime_queue_attr         =
>>
>>   ;
>>   ; Features of a default Ethernet Rx adapter.
>> diff --git a/doc/guides/rel_notes/release_22_07.rst
>b/doc/guides/rel_notes/release_22_07.rst
>> index 88d6e96cc1..a7a912d665 100644
>> --- a/doc/guides/rel_notes/release_22_07.rst
>> +++ b/doc/guides/rel_notes/release_22_07.rst
>> @@ -65,6 +65,11 @@ New Features
>>     * Added support for promiscuous mode on Windows.
>>     * Added support for MTU on Windows.
>>
>> +* **Added support for setting queue attributes at runtime in eventdev.**
>> +
>> +  Added new API ``rte_event_queue_attr_set()``, to set event queue
>attributes
>> +  at runtime.
>> +
>>
>>   Removed Items
>>   -------------
>> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
>> index ce469d47a6..3b85d9f7a5 100644
>> --- a/lib/eventdev/eventdev_pmd.h
>> +++ b/lib/eventdev/eventdev_pmd.h
>> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct
>rte_eventdev *dev,
>>   typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>>   		uint8_t queue_id);
>>
>> +/**
>> + * Set an event queue attribute at runtime.
>> + *
>> + * @param dev
>> + *   Event device pointer
>> + * @param queue_id
>> + *   Event queue index
>> + * @param attr_id
>> + *   Event queue attribute id
>> + * @param attr_value
>> + *   Event queue attribute value
>> + *
>> + * @return
>> + *  - 0: Success.
>> + *  - <0: Error code on failure.
>> + */
>> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
>> +					 uint8_t queue_id, uint32_t attr_id,
>> +					 uint64_t attr_value);
>> +
>>   /**
>>    * Retrieve the default event port configuration.
>>    *
>> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>>   	/**< Set up an event queue. */
>>   	eventdev_queue_release_t queue_release;
>>   	/**< Release an event queue. */
>> +	eventdev_queue_attr_set_t queue_attr_set;
>> +	/**< Set an event queue attribute. */
>>
>>   	eventdev_port_default_conf_get_t port_def_conf;
>>   	/**< Get default port configuration. */
>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>> index 532a253553..a31e99be02 100644
>> --- a/lib/eventdev/rte_eventdev.c
>> +++ b/lib/eventdev/rte_eventdev.c
>> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t
>queue_id, uint32_t attr_id,
>>   	return 0;
>>   }
>>
>> +int
>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>> +			 uint64_t attr_value)
>> +{
>> +	struct rte_eventdev *dev;
>> +
>> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>> +	dev = &rte_eventdevs[dev_id];
>> +	if (!is_valid_queue(dev, queue_id)) {
>> +		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (!(dev->data->event_dev_cap &
>> +	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
>> +		RTE_EDEV_LOG_ERR(
>> +			"Device %" PRIu8 "does not support changing queue
>attributes at runtime",
>> +			dev_id);
>> +		return -ENOTSUP;
>> +	}
>> +
>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -
>ENOTSUP);
>> +	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
>> +					       attr_value);
>> +}
>> +
>>   int
>>   rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>>   		    const uint8_t queues[], const uint8_t priorities[],
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index 42a5660169..c1163ee8ec 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -225,7 +225,7 @@ struct rte_event;
>>   /**< Event scheduling prioritization is based on the priority associated with
>>    *  each event queue.
>>    *
>> - *  @see rte_event_queue_setup()
>> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>>    */
>>   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>>   /**< Event scheduling prioritization is based on the priority associated with
>> @@ -307,6 +307,13 @@ struct rte_event;
>>    * global pool, or process signaling related to load balancing.
>>    */
>>
>> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
>> +/**< Event device is capable of changing the queue attributes at runtime i.e
>> + * after rte_event_queue_setup() or rte_event_start() call sequence. If this
>> + * flag is not set, eventdev queue attributes can only be configured during
>> + * rte_event_queue_setup().
>> + */
>> +
>>   /* Event device priority levels */
>>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>   /**< Highest priority expressed across eventdev subsystem
>> @@ -702,6 +709,30 @@ int
>>   rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>   			uint32_t *attr_value);
>>
>> +/**
>> + * Set an event queue attribute.
>> + *
>> + * @param dev_id
>> + *   Eventdev id
>> + * @param queue_id
>> + *   Eventdev queue id
>> + * @param attr_id
>> + *   The attribute ID to set
>> + * @param attr_value
>> + *   The attribute value to set
>> + *
>> + * @return
>> + *   - 0: Successfully set attribute.
>> + *   - -EINVAL: invalid device, queue or attr_id.
>> + *   - -ENOTSUP: device does not support setting event attribute.
>> + *   - -EBUSY: device is in running state
>
>I thought the point of this new interface was to allow setting queue
>attributes when the event device was running?
>
>It would be useful for the caller to be able to distinguish between
>"busy, but please try again later", and "busy, forever". Maybe the
>latter is what's meant here? In that case, what is the difference with
>EBUSY and ENOTSUP?
>

As there are multiple queue attributes, not every attribute may be supported
by every PMD. -ENOTSUP can be returned for the unsupported ones.

>> + *   - <0: failed to set event queue attribute
>> + */
>> +__rte_experimental
>> +int
>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>> +			 uint64_t attr_value);
>> +
>>   /* Event port specific APIs */
>>
>>   /* Event port configuration bitmap flags */
>> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
>> index cd5dada07f..c581b75c18 100644
>> --- a/lib/eventdev/version.map
>> +++ b/lib/eventdev/version.map
>> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>>
>>   	# added in 22.03
>>   	rte_event_eth_rx_adapter_event_port_get;
>> +
>> +	# added in 22.07
>> +	rte_event_queue_attr_set;
>>   };
>>
>>   INTERNAL {



* Re: [PATCH v3 1/5] eventdev: support to set queue attributes at runtime
  2022-05-16  3:57         ` Shijith Thotton
@ 2022-05-16 10:23           ` Mattias Rönnblom
  2022-05-16 12:12             ` Shijith Thotton
  0 siblings, 1 reply; 58+ messages in thread
From: Mattias Rönnblom @ 2022-05-16 10:23 UTC (permalink / raw)
  To: Shijith Thotton, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, harry.van.haaren, mdr

On 2022-05-16 05:57, Shijith Thotton wrote:
>>> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
>>> attributes at runtime from the values set during initialization using
>>> rte_event_queue_setup(). PMD's supporting this feature should expose the
>>> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>>>
>>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>>> Acked-by: Jerin Jacob <jerinj@marvell.com>
>>> ---
>>>    doc/guides/eventdevs/features/default.ini |  1 +
>>>    doc/guides/rel_notes/release_22_07.rst    |  5 ++++
>>>    lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
>>>    lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>>>    lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
>>>    lib/eventdev/version.map                  |  3 +++
>>>    6 files changed, 89 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/doc/guides/eventdevs/features/default.ini
>> b/doc/guides/eventdevs/features/default.ini
>>> index 2ea233463a..00360f60c6 100644
>>> --- a/doc/guides/eventdevs/features/default.ini
>>> +++ b/doc/guides/eventdevs/features/default.ini
>>> @@ -17,6 +17,7 @@ runtime_port_link          =
>>>    multiple_queue_port        =
>>>    carry_flow_id              =
>>>    maintenance_free           =
>>> +runtime_queue_attr         =
>>>
>>>    ;
>>>    ; Features of a default Ethernet Rx adapter.
>>> diff --git a/doc/guides/rel_notes/release_22_07.rst
>> b/doc/guides/rel_notes/release_22_07.rst
>>> index 88d6e96cc1..a7a912d665 100644
>>> --- a/doc/guides/rel_notes/release_22_07.rst
>>> +++ b/doc/guides/rel_notes/release_22_07.rst
>>> @@ -65,6 +65,11 @@ New Features
>>>      * Added support for promiscuous mode on Windows.
>>>      * Added support for MTU on Windows.
>>>
>>> +* **Added support for setting queue attributes at runtime in eventdev.**
>>> +
>>> +  Added new API ``rte_event_queue_attr_set()``, to set event queue
>> attributes
>>> +  at runtime.
>>> +
>>>
>>>    Removed Items
>>>    -------------
>>> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
>>> index ce469d47a6..3b85d9f7a5 100644
>>> --- a/lib/eventdev/eventdev_pmd.h
>>> +++ b/lib/eventdev/eventdev_pmd.h
>>> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct
>> rte_eventdev *dev,
>>>    typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>>>    		uint8_t queue_id);
>>>
>>> +/**
>>> + * Set an event queue attribute at runtime.
>>> + *
>>> + * @param dev
>>> + *   Event device pointer
>>> + * @param queue_id
>>> + *   Event queue index
>>> + * @param attr_id
>>> + *   Event queue attribute id
>>> + * @param attr_value
>>> + *   Event queue attribute value
>>> + *
>>> + * @return
>>> + *  - 0: Success.
>>> + *  - <0: Error code on failure.
>>> + */
>>> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
>>> +					 uint8_t queue_id, uint32_t attr_id,
>>> +					 uint64_t attr_value);
>>> +
>>>    /**
>>>     * Retrieve the default event port configuration.
>>>     *
>>> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>>>    	/**< Set up an event queue. */
>>>    	eventdev_queue_release_t queue_release;
>>>    	/**< Release an event queue. */
>>> +	eventdev_queue_attr_set_t queue_attr_set;
>>> +	/**< Set an event queue attribute. */
>>>
>>>    	eventdev_port_default_conf_get_t port_def_conf;
>>>    	/**< Get default port configuration. */
>>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>>> index 532a253553..a31e99be02 100644
>>> --- a/lib/eventdev/rte_eventdev.c
>>> +++ b/lib/eventdev/rte_eventdev.c
>>> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t
>> queue_id, uint32_t attr_id,
>>>    	return 0;
>>>    }
>>>
>>> +int
>>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>> +			 uint64_t attr_value)
>>> +{
>>> +	struct rte_eventdev *dev;
>>> +
>>> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>>> +	dev = &rte_eventdevs[dev_id];
>>> +	if (!is_valid_queue(dev, queue_id)) {
>>> +		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (!(dev->data->event_dev_cap &
>>> +	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
>>> +		RTE_EDEV_LOG_ERR(
>>> +			"Device %" PRIu8 "does not support changing queue
>> attributes at runtime",
>>> +			dev_id);
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -
>> ENOTSUP);
>>> +	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
>>> +					       attr_value);
>>> +}
>>> +
>>>    int
>>>    rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>>>    		    const uint8_t queues[], const uint8_t priorities[],
>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>> index 42a5660169..c1163ee8ec 100644
>>> --- a/lib/eventdev/rte_eventdev.h
>>> +++ b/lib/eventdev/rte_eventdev.h
>>> @@ -225,7 +225,7 @@ struct rte_event;
>>>    /**< Event scheduling prioritization is based on the priority associated with
>>>     *  each event queue.
>>>     *
>>> - *  @see rte_event_queue_setup()
>>> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>>>     */
>>>    #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>>>    /**< Event scheduling prioritization is based on the priority associated with
>>> @@ -307,6 +307,13 @@ struct rte_event;
>>>     * global pool, or process signaling related to load balancing.
>>>     */
>>>
>>> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
>>> +/**< Event device is capable of changing the queue attributes at runtime i.e
>>> + * after rte_event_queue_setup() or rte_event_start() call sequence. If this
>>> + * flag is not set, eventdev queue attributes can only be configured during
>>> + * rte_event_queue_setup().
>>> + */
>>> +
>>>    /* Event device priority levels */
>>>    #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>>    /**< Highest priority expressed across eventdev subsystem
>>> @@ -702,6 +709,30 @@ int
>>>    rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>>    			uint32_t *attr_value);
>>>
>>> +/**
>>> + * Set an event queue attribute.
>>> + *
>>> + * @param dev_id
>>> + *   Eventdev id
>>> + * @param queue_id
>>> + *   Eventdev queue id
>>> + * @param attr_id
>>> + *   The attribute ID to set
>>> + * @param attr_value
>>> + *   The attribute value to set
>>> + *
>>> + * @return
>>> + *   - 0: Successfully set attribute.
>>> + *   - -EINVAL: invalid device, queue or attr_id.

Can "invalid" here mean something other than "non-existent"?

>>> + *   - -ENOTSUP: device does not support setting event attribute.
>>> + *   - -EBUSY: device is in running state
>>
>> I thought the point of this new interface was to allow setting queue
>> attributes when the event device was running?
>>
>> It would be useful for the caller to be able to distinguish between
>> "busy, but please try again later", and "busy, forever". Maybe the
>> latter is what's meant here? In that case, what is the difference with
>> EBUSY and ENOTSUP?
>>
> 
> As there are multiple queue attributes, not all attributes could be supported by
> all PMDs. ENOTSUP can be returned for unsupported attributes.
> 

So ENOTSUP means this particular attribute exists, but can't be changed 
at runtime? Is ENOTSUP also returned if no attributes can be modified 
(i.e., the event device does not have the appropriate capability)?

How is the application supposed to behave in case -EBUSY is returned? 
What does -EBUSY mean? The event device being in a running state doesn't 
sound like an error to me.


>>> + *   - <0: failed to set event queue attribute
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>> +			 uint64_t attr_value);
>>> +
>>>    /* Event port specific APIs */
>>>
>>>    /* Event port configuration bitmap flags */
>>> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
>>> index cd5dada07f..c581b75c18 100644
>>> --- a/lib/eventdev/version.map
>>> +++ b/lib/eventdev/version.map
>>> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>>>
>>>    	# added in 22.03
>>>    	rte_event_eth_rx_adapter_event_port_get;
>>> +
>>> +	# added in 22.07
>>> +	rte_event_queue_attr_set;
>>>    };
>>>
>>>    INTERNAL {
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* RE: [PATCH v3 1/5] eventdev: support to set queue attributes at runtime
  2022-05-16 10:23           ` Mattias Rönnblom
@ 2022-05-16 12:12             ` Shijith Thotton
  0 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 12:12 UTC (permalink / raw)
  To: Mattias Rönnblom, dev, Jerin Jacob Kollanukkaran
  Cc: Pavan Nikhilesh Bhagavatula, harry.van.haaren, mdr

>>>> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
>>>> attributes at runtime from the values set during initialization using
>>>> rte_event_queue_setup(). PMD's supporting this feature should expose the
>>>> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>>>>
>>>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>>>> Acked-by: Jerin Jacob <jerinj@marvell.com>
>>>> ---
>>>>    doc/guides/eventdevs/features/default.ini |  1 +
>>>>    doc/guides/rel_notes/release_22_07.rst    |  5 ++++
>>>>    lib/eventdev/eventdev_pmd.h               | 22 +++++++++++++++
>>>>    lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>>>>    lib/eventdev/rte_eventdev.h               | 33 ++++++++++++++++++++++-
>>>>    lib/eventdev/version.map                  |  3 +++
>>>>    6 files changed, 89 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/doc/guides/eventdevs/features/default.ini
>>> b/doc/guides/eventdevs/features/default.ini
>>>> index 2ea233463a..00360f60c6 100644
>>>> --- a/doc/guides/eventdevs/features/default.ini
>>>> +++ b/doc/guides/eventdevs/features/default.ini
>>>> @@ -17,6 +17,7 @@ runtime_port_link          =
>>>>    multiple_queue_port        =
>>>>    carry_flow_id              =
>>>>    maintenance_free           =
>>>> +runtime_queue_attr         =
>>>>
>>>>    ;
>>>>    ; Features of a default Ethernet Rx adapter.
>>>> diff --git a/doc/guides/rel_notes/release_22_07.rst
>>> b/doc/guides/rel_notes/release_22_07.rst
>>>> index 88d6e96cc1..a7a912d665 100644
>>>> --- a/doc/guides/rel_notes/release_22_07.rst
>>>> +++ b/doc/guides/rel_notes/release_22_07.rst
>>>> @@ -65,6 +65,11 @@ New Features
>>>>      * Added support for promiscuous mode on Windows.
>>>>      * Added support for MTU on Windows.
>>>>
>>>> +* **Added support for setting queue attributes at runtime in eventdev.**
>>>> +
>>>> +  Added new API ``rte_event_queue_attr_set()``, to set event queue
>>> attributes
>>>> +  at runtime.
>>>> +
>>>>
>>>>    Removed Items
>>>>    -------------
>>>> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
>>>> index ce469d47a6..3b85d9f7a5 100644
>>>> --- a/lib/eventdev/eventdev_pmd.h
>>>> +++ b/lib/eventdev/eventdev_pmd.h
>>>> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct
>>> rte_eventdev *dev,
>>>>    typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>>>>    		uint8_t queue_id);
>>>>
>>>> +/**
>>>> + * Set an event queue attribute at runtime.
>>>> + *
>>>> + * @param dev
>>>> + *   Event device pointer
>>>> + * @param queue_id
>>>> + *   Event queue index
>>>> + * @param attr_id
>>>> + *   Event queue attribute id
>>>> + * @param attr_value
>>>> + *   Event queue attribute value
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success.
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
>>>> +					 uint8_t queue_id, uint32_t attr_id,
>>>> +					 uint64_t attr_value);
>>>> +
>>>>    /**
>>>>     * Retrieve the default event port configuration.
>>>>     *
>>>> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>>>>    	/**< Set up an event queue. */
>>>>    	eventdev_queue_release_t queue_release;
>>>>    	/**< Release an event queue. */
>>>> +	eventdev_queue_attr_set_t queue_attr_set;
>>>> +	/**< Set an event queue attribute. */
>>>>
>>>>    	eventdev_port_default_conf_get_t port_def_conf;
>>>>    	/**< Get default port configuration. */
>>>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>>>> index 532a253553..a31e99be02 100644
>>>> --- a/lib/eventdev/rte_eventdev.c
>>>> +++ b/lib/eventdev/rte_eventdev.c
>>>> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t
>>> queue_id, uint32_t attr_id,
>>>>    	return 0;
>>>>    }
>>>>
>>>> +int
>>>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t
>attr_id,
>>>> +			 uint64_t attr_value)
>>>> +{
>>>> +	struct rte_eventdev *dev;
>>>> +
>>>> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>>>> +	dev = &rte_eventdevs[dev_id];
>>>> +	if (!is_valid_queue(dev, queue_id)) {
>>>> +		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	if (!(dev->data->event_dev_cap &
>>>> +	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
>>>> +		RTE_EDEV_LOG_ERR(
>>>> +			"Device %" PRIu8 "does not support changing queue
>>> attributes at runtime",
>>>> +			dev_id);
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -
>>> ENOTSUP);
>>>> +	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
>>>> +					       attr_value);
>>>> +}
>>>> +
>>>>    int
>>>>    rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>>>>    		    const uint8_t queues[], const uint8_t priorities[],
>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>> index 42a5660169..c1163ee8ec 100644
>>>> --- a/lib/eventdev/rte_eventdev.h
>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>> @@ -225,7 +225,7 @@ struct rte_event;
>>>>    /**< Event scheduling prioritization is based on the priority associated with
>>>>     *  each event queue.
>>>>     *
>>>> - *  @see rte_event_queue_setup()
>>>> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>>>>     */
>>>>    #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>>>>    /**< Event scheduling prioritization is based on the priority associated with
>>>> @@ -307,6 +307,13 @@ struct rte_event;
>>>>     * global pool, or process signaling related to load balancing.
>>>>     */
>>>>
>>>> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
>>>> +/**< Event device is capable of changing the queue attributes at runtime i.e
>>>> + * after rte_event_queue_setup() or rte_event_start() call sequence. If this
>>>> + * flag is not set, eventdev queue attributes can only be configured during
>>>> + * rte_event_queue_setup().
>>>> + */
>>>> +
>>>>    /* Event device priority levels */
>>>>    #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>>>    /**< Highest priority expressed across eventdev subsystem
>>>> @@ -702,6 +709,30 @@ int
>>>>    rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t
>attr_id,
>>>>    			uint32_t *attr_value);
>>>>
>>>> +/**
>>>> + * Set an event queue attribute.
>>>> + *
>>>> + * @param dev_id
>>>> + *   Eventdev id
>>>> + * @param queue_id
>>>> + *   Eventdev queue id
>>>> + * @param attr_id
>>>> + *   The attribute ID to set
>>>> + * @param attr_value
>>>> + *   The attribute value to set
>>>> + *
>>>> + * @return
>>>> + *   - 0: Successfully set attribute.
>>>> + *   - -EINVAL: invalid device, queue or attr_id.
>
>Can "invalid" here mean something other than "non-existent"?
>

No. 

>>>> + *   - -ENOTSUP: device does not support setting event attribute.
>>>> + *   - -EBUSY: device is in running state
>>>
>>> I thought the point of this new interface was to allow setting queue
>>> attributes when the event device was running?
>>>
>>> It would be useful for the caller to be able to distinguish between
>>> "busy, but please try again later", and "busy, forever". Maybe the
>>> latter is what's meant here? In that case, what is the difference with
>>> EBUSY and ENOTSUP?
>>>
>>
>> As there are multiple queue attributes, not all attributes could be supported by
>> all PMDs. ENOTSUP can be returned for unsupported attributes.
>>
>
>So ENOTSUP means this particular attribute exists, but can't be changed
>at runtime?

Yes. If the attribute doesn’t exist, EINVAL is returned.

> Is ENOTSUP returned also if no attributes can be modified
>(i.e., the event device does not have the appropriate capability)?
>

Yes. Applications should also check the eventdev capability
RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR before calling this API.

>How is the application supposed to behave in case -EBUSY is returned?
>What does -EBUSY mean? The event device being in a running state doesn't
>sound like an error to me.
>
 
EBUSY was added to indicate that the device was busy and unable to set the
attribute at that time. I think it can be removed, as it is causing
confusion. Please let me know if that makes sense, and I will send a v4.

>
>>>> + *   - <0: failed to set event queue attribute
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t
>attr_id,
>>>> +			 uint64_t attr_value);
>>>> +
>>>>    /* Event port specific APIs */
>>>>
>>>>    /* Event port configuration bitmap flags */
>>>> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
>>>> index cd5dada07f..c581b75c18 100644
>>>> --- a/lib/eventdev/version.map
>>>> +++ b/lib/eventdev/version.map
>>>> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>>>>
>>>>    	# added in 22.03
>>>>    	rte_event_eth_rx_adapter_event_port_get;
>>>> +
>>>> +	# added in 22.07
>>>> +	rte_event_queue_attr_set;
>>>>    };
>>>>
>>>>    INTERNAL {
>>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v4 0/5] Extend and set event queue attributes at runtime
  2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
                       ` (4 preceding siblings ...)
  2022-05-15  9:53     ` [PATCH v3 5/5] event/cnxk: support to set runtime queue attributes Shijith Thotton
@ 2022-05-16 17:35     ` Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 1/5] eventdev: support to set " Shijith Thotton
                         ` (4 more replies)
  5 siblings, 5 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 17:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

This series adds support for setting event queue attributes at runtime
and adds two new event queue attributes weight and affinity. Eventdev
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR is added to expose the
capability to set attributes at runtime and rte_event_queue_attr_set()
API is used to set the attributes.

Attributes weight and affinity are not yet added to rte_event_queue_conf
structure to avoid ABI break and will be added in 22.11. Till then, PMDs
using the new attributes are expected to manage them.

Test application changes and example implementation are added as last
three patches.

v4:
* Removed EBUSY from rte_event_queue_attr_set() return vals.

v3:
* Updated release notes.
* Removed deprecation patch from series.
* Used event enq/deq to test queue priority.

v2:
* Modified attr_value type from u32 to u64 for set().
* Removed RTE_EVENT_QUEUE_ATTR_MAX macro.
* Fixed return value in implementation.

Pavan Nikhilesh (1):
  common/cnxk: use lock when accessing mbox of SSO

Shijith Thotton (4):
  eventdev: support to set queue attributes at runtime
  eventdev: add weight and affinity to queue attributes
  test/event: test cases to test runtime queue attribute
  event/cnxk: support to set runtime queue attributes

 app/test/test_eventdev.c                  | 201 ++++++++++++++++++++++
 doc/guides/eventdevs/features/cnxk.ini    |   1 +
 doc/guides/eventdevs/features/default.ini |   1 +
 doc/guides/rel_notes/release_22_07.rst    |  12 ++
 drivers/common/cnxk/roc_sso.c             | 174 +++++++++++++------
 drivers/common/cnxk/roc_sso_priv.h        |   1 +
 drivers/common/cnxk/roc_tim.c             | 134 ++++++++++-----
 drivers/event/cnxk/cn10k_eventdev.c       |   4 +
 drivers/event/cnxk/cn9k_eventdev.c        |   4 +
 drivers/event/cnxk/cnxk_eventdev.c        |  91 +++++++++-
 drivers/event/cnxk/cnxk_eventdev.h        |  19 ++
 lib/eventdev/eventdev_pmd.h               |  44 +++++
 lib/eventdev/rte_eventdev.c               |  38 ++++
 lib/eventdev/rte_eventdev.h               |  70 +++++++-
 lib/eventdev/version.map                  |   3 +
 15 files changed, 694 insertions(+), 103 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v4 1/5] eventdev: support to set queue attributes at runtime
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
@ 2022-05-16 17:35       ` Shijith Thotton
  2022-05-16 18:02         ` Jerin Jacob
  2022-05-19  8:49         ` Ray Kinsella
  2022-05-16 17:35       ` [PATCH v4 2/5] eventdev: add weight and affinity to queue attributes Shijith Thotton
                         ` (3 subsequent siblings)
  4 siblings, 2 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 17:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Added a new eventdev API rte_event_queue_attr_set(), to set event queue
attributes at runtime from the values set during initialization using
rte_event_queue_setup(). PMDs supporting this feature should expose the
capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 doc/guides/eventdevs/features/default.ini |  1 +
 doc/guides/rel_notes/release_22_07.rst    |  5 ++++
 lib/eventdev/eventdev_pmd.h               | 22 ++++++++++++++++
 lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
 lib/eventdev/rte_eventdev.h               | 32 ++++++++++++++++++++++-
 lib/eventdev/version.map                  |  3 +++
 6 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 2ea233463a..00360f60c6 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -17,6 +17,7 @@ runtime_port_link          =
 multiple_queue_port        =
 carry_flow_id              =
 maintenance_free           =
+runtime_queue_attr         =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 88d6e96cc1..a7a912d665 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,11 @@ New Features
   * Added support for promiscuous mode on Windows.
   * Added support for MTU on Windows.
 
+* **Added support for setting queue attributes at runtime in eventdev.**
+
+  Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
+  at runtime.
+
 
 Removed Items
 -------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..3b85d9f7a5 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Set an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint64_t attr_value);
+
 /**
  * Retrieve the default event port configuration.
  *
@@ -1211,6 +1231,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_set_t queue_attr_set;
+	/**< Set an event queue attribute. */
 
 	eventdev_port_default_conf_get_t port_def_conf;
 	/**< Get default port configuration. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..a31e99be02 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	if (!(dev->data->event_dev_cap &
+	      RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
+		RTE_EDEV_LOG_ERR(
+			"Device %" PRIu8 " does not support changing queue attributes at runtime",
+			dev_id);
+		return -ENOTSUP;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
+	return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
+					       attr_value);
+}
+
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..a79b1397eb 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -225,7 +225,7 @@ struct rte_event;
 /**< Event scheduling prioritization is based on the priority associated with
  *  each event queue.
  *
- *  @see rte_event_queue_setup()
+ *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
 #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
 /**< Event scheduling prioritization is based on the priority associated with
@@ -307,6 +307,13 @@ struct rte_event;
  * global pool, or process signaling related to load balancing.
  */
 
+#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
+/**< Event device is capable of changing the queue attributes at runtime, i.e.
+ * after rte_event_queue_setup() or rte_event_start() call sequence. If this
+ * flag is not set, eventdev queue attributes can only be configured during
+ * rte_event_queue_setup().
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
 /**< Highest priority expressed across eventdev subsystem
@@ -702,6 +709,29 @@ int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value);
 
+/**
+ * Set an event queue attribute.
+ *
+ * @param dev_id
+ *   Eventdev id
+ * @param queue_id
+ *   Eventdev queue id
+ * @param attr_id
+ *   The attribute ID to set
+ * @param attr_value
+ *   The attribute value to set
+ *
+ * @return
+ *   - 0: Successfully set attribute.
+ *   - -EINVAL: invalid device, queue or attr_id.
+ *   - -ENOTSUP: device does not support setting the event attribute.
+ *   - <0: failed to set event queue attribute
+ */
+__rte_experimental
+int
+rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
+			 uint64_t attr_value);
+
 /* Event port specific APIs */
 
 /* Event port configuration bitmap flags */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..c581b75c18 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_queue_attr_set;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v4 2/5] eventdev: add weight and affinity to queue attributes
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 1/5] eventdev: support to set " Shijith Thotton
@ 2022-05-16 17:35       ` Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 3/5] test/event: test cases to test runtime queue attribute Shijith Thotton
                         ` (2 subsequent siblings)
  4 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 17:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Extended eventdev queue QoS attributes to support weight and affinity.
If queues are of the same priority, events from the queue with the
highest weight will be scheduled first. Affinity indicates the number of
subsequent schedule calls from an event port that will use the same
event queue. A schedule call selects another queue if the current queue
goes empty or the schedule count reaches the affinity count.

To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure; PMDs are responsible for managing
them. The new eventdev op queue_attr_get can be used to retrieve them
from the PMD.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 doc/guides/rel_notes/release_22_07.rst |  7 +++++
 lib/eventdev/eventdev_pmd.h            | 22 +++++++++++++++
 lib/eventdev/rte_eventdev.c            | 12 ++++++++
 lib/eventdev/rte_eventdev.h            | 38 ++++++++++++++++++++++++--
 4 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a7a912d665..f35a31bbdf 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -70,6 +70,13 @@ New Features
   Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
   at runtime.
 
+* **Added new queues attributes weight and affinity in eventdev.**
+
+  Defined new event queue attributes weight and affinity as below:
+
+  * ``RTE_EVENT_QUEUE_ATTR_WEIGHT``
+  * ``RTE_EVENT_QUEUE_ATTR_AFFINITY``
+
 
 Removed Items
 -------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 3b85d9f7a5..5495aee4f6 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
 typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
 		uint8_t queue_id);
 
+/**
+ * Get an event queue attribute at runtime.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param attr_id
+ *   Event queue attribute id
+ * @param[out] attr_value
+ *   Event queue attribute value
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
+					 uint8_t queue_id, uint32_t attr_id,
+					 uint32_t *attr_value);
+
 /**
  * Set an event queue attribute at runtime.
  *
@@ -1231,6 +1251,8 @@ struct eventdev_ops {
 	/**< Set up an event queue. */
 	eventdev_queue_release_t queue_release;
 	/**< Release an event queue. */
+	eventdev_queue_attr_get_t queue_attr_get;
+	/**< Get an event queue attribute. */
 	eventdev_queue_attr_set_t queue_attr_set;
 	/**< Set an event queue attribute. */
 
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index a31e99be02..12b261f923 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 
 		*attr_value = conf->schedule_type;
 		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		*attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		*attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
+		if (dev->dev_ops->queue_attr_get)
+			return (*dev->dev_ops->queue_attr_get)(
+				dev, queue_id, attr_id, attr_value);
+		break;
 	default:
 		return -EINVAL;
 	};
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a79b1397eb..9a7c0bcf25 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -222,8 +222,14 @@ struct rte_event;
 
 /* Event device capability bitmap flags */
 #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
-/**< Event scheduling prioritization is based on the priority associated with
- *  each event queue.
+/**< Event scheduling prioritization is based on the priority and weight
+ * associated with each event queue. Events from the queue with the highest
+ * priority are scheduled first. If the queues are of the same priority, the
+ * weights of the queues are used to select a queue in a weighted round-robin
+ * fashion. Subsequent dequeue calls from an event port may see events from
+ * the same event queue if the queue is configured with an affinity count.
+ * The affinity count is the number of subsequent dequeue calls in which an
+ * event port should use the same event queue if the queue is non-empty.
  *
  *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
  */
@@ -331,6 +337,26 @@ struct rte_event;
  * @see rte_event_port_link()
  */
 
+/* Event queue scheduling weights */
+#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST 255
+/**< Highest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_WEIGHT_LOWEST 0
+/**< Lowest weight of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
+/* Event queue scheduling affinity */
+#define RTE_EVENT_QUEUE_AFFINITY_HIGHEST 255
+/**< Highest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+#define RTE_EVENT_QUEUE_AFFINITY_LOWEST 0
+/**< Lowest scheduling affinity of an event queue
+ * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
+ */
+
 /**
  * Get the total number of event devices that have been successfully
  * initialised.
@@ -684,6 +710,14 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
  * The schedule type of the queue.
  */
 #define RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE 4
+/**
+ * The weight of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_WEIGHT 5
+/**
+ * Affinity of the queue.
+ */
+#define RTE_EVENT_QUEUE_ATTR_AFFINITY 6
 
 /**
  * Get an attribute from a queue.
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v4 3/5] test/event: test cases to test runtime queue attribute
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 1/5] eventdev: support to set " Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 2/5] eventdev: add weight and affinity to queue attributes Shijith Thotton
@ 2022-05-16 17:35       ` Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 4/5] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 5/5] event/cnxk: support to set runtime queue attributes Shijith Thotton
  4 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 17:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Added test cases to test changing of queue QoS attributes priority,
weight and affinity at runtime.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 app/test/test_eventdev.c | 201 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 201 insertions(+)

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 4f51042bda..336529038e 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -385,6 +385,201 @@ test_eventdev_queue_attr_priority(void)
 	return TEST_SUCCESS;
 }
 
+static int
+test_eventdev_queue_attr_priority_runtime(void)
+{
+	uint32_t queue_count, queue_req, prio, deq_cnt;
+	struct rte_event_queue_conf qconf;
+	struct rte_event_port_conf pconf;
+	struct rte_event_dev_info info;
+	struct rte_event event = {
+		.op = RTE_EVENT_OP_NEW,
+		.event_type = RTE_EVENT_TYPE_CPU,
+		.sched_type = RTE_SCHED_TYPE_ATOMIC,
+		.u64 = 0xbadbadba,
+	};
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	/* Need at least 2 queues to test LOW and HIGH priority. */
+	TEST_ASSERT(queue_count > 1, "Not enough event queues, needed 2");
+	queue_req = 2;
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	ret = rte_event_queue_attr_set(TEST_DEV_ID, 0,
+				       RTE_EVENT_QUEUE_ATTR_PRIORITY,
+				       RTE_EVENT_DEV_PRIORITY_LOWEST);
+	if (ret == -ENOTSUP)
+		return TEST_SKIPPED;
+	TEST_ASSERT_SUCCESS(ret, "Queue0 priority set failed");
+
+	ret = rte_event_queue_attr_set(TEST_DEV_ID, 1,
+				       RTE_EVENT_QUEUE_ATTR_PRIORITY,
+				       RTE_EVENT_DEV_PRIORITY_HIGHEST);
+	if (ret == -ENOTSUP)
+		return TEST_SKIPPED;
+	TEST_ASSERT_SUCCESS(ret, "Queue1 priority set failed");
+
+	/* Setup event port 0 */
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+	ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0);
+	TEST_ASSERT(ret == (int)queue_count, "Failed to link port, device %d",
+		    TEST_DEV_ID);
+
+	ret = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID);
+
+	for (i = 0; i < (int)queue_req; i++) {
+		event.queue_id = i;
+		while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1)
+			rte_pause();
+	}
+
+	prio = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	deq_cnt = 0;
+	while (deq_cnt < queue_req) {
+		uint32_t queue_prio;
+
+		if (rte_event_dequeue_burst(TEST_DEV_ID, 0, &event, 1, 0) == 0)
+			continue;
+
+		ret = rte_event_queue_attr_get(TEST_DEV_ID, event.queue_id,
+					       RTE_EVENT_QUEUE_ATTR_PRIORITY,
+					       &queue_prio);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue priority get failed");
+		TEST_ASSERT(queue_prio >= prio,
+			    "Received event from a lower priority queue first");
+		prio = queue_prio;
+		deq_cnt++;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_weight_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
+		ret = rte_event_queue_attr_set(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, set_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue weight set failed");
+
+		ret = rte_event_queue_attr_get(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_WEIGHT, &get_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue weight get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong weight value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_attr_affinity_runtime(void)
+{
+	struct rte_event_queue_conf qconf;
+	struct rte_event_dev_info info;
+	uint32_t queue_count;
+	int i, ret;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
+		return TEST_SKIPPED;
+
+	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
+				    TEST_DEV_ID, RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+				    &queue_count),
+			    "Queue count get failed");
+
+	for (i = 0; i < (int)queue_count; i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < (int)queue_count; i++) {
+		uint32_t get_val;
+		uint64_t set_val;
+
+		set_val = i % RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+		ret = rte_event_queue_attr_set(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, set_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue affinity set failed");
+
+		ret = rte_event_queue_attr_get(
+			TEST_DEV_ID, i, RTE_EVENT_QUEUE_ATTR_AFFINITY, &get_val);
+		if (ret == -ENOTSUP)
+			return TEST_SKIPPED;
+
+		TEST_ASSERT_SUCCESS(ret, "Queue affinity get failed");
+		TEST_ASSERT_EQUAL(get_val, set_val,
+				  "Wrong affinity value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_queue_attr_nb_atomic_flows(void)
 {
@@ -964,6 +1159,12 @@ static struct unit_test_suite eventdev_common_testsuite  = {
 			test_eventdev_queue_count),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_priority),
+		TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+			test_eventdev_queue_attr_priority_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_weight_runtime),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_attr_affinity_runtime),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
 			test_eventdev_queue_attr_nb_atomic_flows),
 		TEST_CASE_ST(eventdev_configure_setup, NULL,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v4 4/5] common/cnxk: use lock when accessing mbox of SSO
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
                         ` (2 preceding siblings ...)
  2022-05-16 17:35       ` [PATCH v4 3/5] test/event: test cases to test runtime queue attribute Shijith Thotton
@ 2022-05-16 17:35       ` Shijith Thotton
  2022-05-16 17:35       ` [PATCH v4 5/5] event/cnxk: support to set runtime queue attributes Shijith Thotton
  4 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 17:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Pavan Nikhilesh, harry.van.haaren, mattias.ronnblom, mdr,
	Shijith Thotton, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Since the mbox is now accessed from multiple threads, use a lock to
synchronize access.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 drivers/common/cnxk/roc_sso.c      | 174 +++++++++++++++++++++--------
 drivers/common/cnxk/roc_sso_priv.h |   1 +
 drivers/common/cnxk/roc_tim.c      | 134 ++++++++++++++--------
 3 files changed, 215 insertions(+), 94 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index f8a0a96533..358d37a9f2 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -36,8 +36,8 @@ sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	}
 
 	rc = mbox_process_msg(dev->mbox, rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -69,8 +69,8 @@ sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 	}
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	return 0;
 }
@@ -98,7 +98,7 @@ sso_rsrc_attach(struct roc_sso *roc_sso, enum sso_lf_type lf_type,
 	}
 
 	req->modify = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -126,7 +126,7 @@ sso_rsrc_detach(struct roc_sso *roc_sso, enum sso_lf_type lf_type)
 	}
 
 	req->partial = true;
-	if (mbox_process(dev->mbox) < 0)
+	if (mbox_process(dev->mbox))
 		return -EIO;
 
 	return 0;
@@ -141,9 +141,9 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsrc_cnt);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Failed to get free resource count\n");
-		return rc;
+		return -EIO;
 	}
 
 	roc_sso->max_hwgrp = rsrc_cnt->sso;
@@ -197,8 +197,8 @@ sso_msix_fill(struct roc_sso *roc_sso, uint16_t nb_hws, uint16_t nb_hwgrp)
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_hws; i++)
 		sso->hws_msix_offset[i] = rsp->ssow_msixoff[i];
@@ -285,53 +285,71 @@ int
 roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
 		      struct roc_sso_hws_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_hws_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_hws_stats *)mbox_alloc_msg_sso_hws_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_hws_stats *)
 			mbox_alloc_msg_sso_hws_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->hws = hws;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->arbitration = req_rsp->arbitration;
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 			struct roc_sso_hwgrp_stats *stats)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
 	struct sso_grp_stats *req_rsp;
+	struct dev *dev = &sso->dev;
 	int rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req_rsp = (struct sso_grp_stats *)mbox_alloc_msg_sso_grp_get_stats(
 		dev->mbox);
 	if (req_rsp == NULL) {
 		rc = mbox_process(dev->mbox);
-		if (rc < 0)
-			return rc;
+		if (rc) {
+			rc = -EIO;
+			goto fail;
+		}
 		req_rsp = (struct sso_grp_stats *)
 			mbox_alloc_msg_sso_grp_get_stats(dev->mbox);
-		if (req_rsp == NULL)
-			return -ENOSPC;
+		if (req_rsp == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 	}
 	req_rsp->grp = hwgrp;
 	rc = mbox_process_msg(dev->mbox, (void **)&req_rsp);
-	if (rc)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
 
 	stats->aw_status = req_rsp->aw_status;
 	stats->dq_pc = req_rsp->dq_pc;
@@ -341,7 +359,10 @@ roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 	stats->ts_pc = req_rsp->ts_pc;
 	stats->wa_pc = req_rsp->wa_pc;
 	stats->ws_pc = req_rsp->ws_pc;
-	return 0;
+
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -358,10 +379,12 @@ int
 roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			 uint8_t nb_qos, uint32_t nb_xaq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_qos_cfg *req;
 	int i, rc;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	for (i = 0; i < nb_qos; i++) {
 		uint8_t xaq_prcnt = qos[i].xaq_prcnt;
 		uint8_t iaq_prcnt = qos[i].iaq_prcnt;
@@ -370,11 +393,16 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 		req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
 		if (req == NULL) {
 			rc = mbox_process(dev->mbox);
-			if (rc < 0)
-				return rc;
+			if (rc) {
+				rc = -EIO;
+				goto fail;
+			}
+
 			req = mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-			if (req == NULL)
-				return -ENOSPC;
+			if (req == NULL) {
+				rc = -ENOSPC;
+				goto fail;
+			}
 		}
 		req->grp = qos[i].hwgrp;
 		req->xaq_limit = (nb_xaq * (xaq_prcnt ? xaq_prcnt : 100)) / 100;
@@ -386,7 +414,12 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 			       100;
 	}
 
-	return mbox_process(dev->mbox);
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		rc = -EIO;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -482,11 +515,16 @@ sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_init_xaq_aura(struct roc_sso *roc_sso, uint32_t nb_xae)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
-				       roc_sso->xae_waes, roc_sso->xaq_buf_size,
-				       roc_sso->nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_init_xaq_aura(dev, &roc_sso->xaq, nb_xae,
+				     roc_sso->xae_waes, roc_sso->xaq_buf_size,
+				     roc_sso->nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -515,9 +553,14 @@ sso_hwgrp_free_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
 int
 roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso, uint16_t nb_hwgrp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_free_xaq_aura(dev, &roc_sso->xaq, nb_hwgrp);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -533,16 +576,24 @@ sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 	req->npa_aura_id = npa_aura_id;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 			uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -555,40 +606,56 @@ sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
 		return -EINVAL;
 	req->hwgrps = hwgrps;
 
-	return mbox_process(dev->mbox);
+	if (mbox_process(dev->mbox))
+		return -EIO;
+
+	return 0;
 }
 
 int
 roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
+	int rc;
 
-	return sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_lock(&sso->mbox_lock);
+	rc = sso_hwgrp_release_xaq(dev, hwgrps);
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct dev *dev = &sso->dev;
 	struct sso_grp_priority *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->grp = hwgrp;
 	req->weight = weight;
 	req->affinity = affinity;
 	req->priority = priority;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0)
-		return rc;
+	if (rc) {
+		rc = -EIO;
+		goto fail;
+	}
+	plt_spinlock_unlock(&sso->mbox_lock);
 	plt_sso_dbg("HWGRP %d weight %d affinity %d priority %d", hwgrp, weight,
 		    affinity, priority);
 
 	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -603,10 +670,11 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	if (roc_sso->max_hws < nb_hws)
 		return -ENOENT;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
 	if (rc < 0) {
 		plt_err("Unable to attach SSO HWS LFs");
-		return rc;
+		goto fail;
 	}
 
 	rc = sso_rsrc_attach(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
@@ -645,6 +713,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto sso_msix_fail;
 	}
 
+	plt_spinlock_unlock(&sso->mbox_lock);
 	roc_sso->nb_hwgrp = nb_hwgrp;
 	roc_sso->nb_hws = nb_hws;
 
@@ -657,6 +726,8 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -678,6 +749,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	roc_sso->nb_hwgrp = 0;
 	roc_sso->nb_hws = 0;
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
 
 int
@@ -696,6 +768,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso = roc_sso_to_sso_priv(roc_sso);
 	memset(sso, 0, sizeof(*sso));
 	pci_dev = roc_sso->pci_dev;
+	plt_spinlock_init(&sso->mbox_lock);
 
 	rc = dev_init(&sso->dev, pci_dev);
 	if (rc < 0) {
@@ -703,6 +776,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 		goto fail;
 	}
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	rc = sso_rsrc_get(roc_sso);
 	if (rc < 0) {
 		plt_err("Failed to get SSO resources");
@@ -739,6 +813,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 	sso->pci_dev = pci_dev;
 	sso->dev.drv_inited = true;
 	roc_sso->lmt_base = sso->dev.lmt_base;
+	plt_spinlock_unlock(&sso->mbox_lock);
 
 	return 0;
 link_mem_free:
@@ -746,6 +821,7 @@ roc_sso_dev_init(struct roc_sso *roc_sso)
 rsrc_fail:
 	rc |= dev_fini(&sso->dev, pci_dev);
 fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..674e4e0a39 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -22,6 +22,7 @@ struct sso {
 	/* SSO link mapping. */
 	struct plt_bitmap **link_map;
 	void *link_map_mem;
+	plt_spinlock_t mbox_lock;
 } __plt_cache_aligned;
 
 enum sso_err_status {
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index cefd9bc89d..0f9209937b 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -8,15 +8,16 @@
 static int
 tim_fill_msix(struct roc_tim *roc_tim, uint16_t nb_ring)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct tim *tim = roc_tim_to_tim_priv(roc_tim);
+	struct dev *dev = &sso->dev;
 	struct msix_offset_rsp *rsp;
 	int i, rc;
 
 	mbox_alloc_msg_msix_offset(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0)
-		return rc;
+	if (rc)
+		return -EIO;
 
 	for (i = 0; i < nb_ring; i++)
 		tim->tim_msix_offsets[i] = rsp->timlf_msixoff[i];
@@ -88,20 +89,23 @@ int
 roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 		  uint32_t *cur_bkt)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_enable_rsp *rsp;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_enable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (cur_bkt)
@@ -109,28 +113,34 @@ roc_tim_lf_enable(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *start_tsc,
 	if (start_tsc)
 		*start_tsc = rsp->timestarted;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
 roc_tim_lf_disable(struct roc_tim *roc_tim, uint8_t ring_id)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_ring_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_disable_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 uintptr_t
@@ -147,13 +157,15 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 		  uint8_t ena_dfb, uint32_t bucket_sz, uint32_t chunk_sz,
 		  uint32_t interval, uint64_t intervalns, uint64_t clockfreq)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_config_req *req;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_config_ring(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 	req->bigendian = false;
 	req->bucketsize = bucket_sz;
@@ -167,12 +179,14 @@ roc_tim_lf_config(struct roc_tim *roc_tim, uint8_t ring_id,
 	req->gpioedge = TIM_GPIO_LTOH_TRANS;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -180,27 +194,32 @@ roc_tim_lf_interval(struct roc_tim *roc_tim, enum roc_tim_clk_src clk_src,
 		    uint64_t clockfreq, uint64_t *intervalns,
 		    uint64_t *interval)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	struct dev *dev = &sso->dev;
 	struct tim_intvl_req *req;
 	struct tim_intvl_rsp *rsp;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_get_min_intvl(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 
 	req->clockfreq = clockfreq;
 	req->clocksource = clk_src;
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	*intervalns = rsp->intvl_ns;
 	*interval = rsp->intvl_cyc;
 
-	return 0;
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
+	return rc;
 }
 
 int
@@ -214,17 +233,19 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	struct dev *dev = &sso->dev;
 	int rc = -ENOSPC;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_alloc(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->npa_pf_func = idev_npa_pffunc_get();
 	req->sso_pf_func = idev_sso_pffunc_get();
 	req->ring = ring_id;
 
 	rc = mbox_process_msg(dev->mbox, (void **)&rsp);
-	if (rc < 0) {
+	if (rc) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
+		goto fail;
 	}
 
 	if (clk)
@@ -235,12 +256,18 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
 	if (rc < 0) {
 		plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
 		free_req = mbox_alloc_msg_tim_lf_free(dev->mbox);
-		if (free_req == NULL)
-			return -ENOSPC;
+		if (free_req == NULL) {
+			rc = -ENOSPC;
+			goto fail;
+		}
 		free_req->ring = ring_id;
-		mbox_process(dev->mbox);
+		rc = mbox_process(dev->mbox);
+		if (rc)
+			rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return rc;
 }
 
@@ -256,17 +283,20 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
 	tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
 				tim->tim_msix_offsets[ring_id]);
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	req = mbox_alloc_msg_tim_lf_free(dev->mbox);
 	if (req == NULL)
-		return rc;
+		goto fail;
 	req->ring = ring_id;
 
 	rc = mbox_process(dev->mbox);
 	if (rc < 0) {
 		tim_err_desc(rc);
-		return rc;
+		rc = -EIO;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return 0;
 }
 
@@ -276,40 +306,48 @@ roc_tim_init(struct roc_tim *roc_tim)
 	struct rsrc_attach_req *attach_req;
 	struct rsrc_detach_req *detach_req;
 	struct free_rsrcs_rsp *free_rsrc;
-	struct dev *dev;
+	struct sso *sso;
 	uint16_t nb_lfs;
+	struct dev *dev;
 	int rc;
 
 	if (roc_tim == NULL || roc_tim->roc_sso == NULL)
 		return TIM_ERR_PARAM;
 
+	sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
+	dev = &sso->dev;
 	PLT_STATIC_ASSERT(sizeof(struct tim) <= TIM_MEM_SZ);
-	dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
 	nb_lfs = roc_tim->nb_lfs;
+	plt_spinlock_lock(&sso->mbox_lock);
 	mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = mbox_process_msg(dev->mbox, (void *)&free_rsrc);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to get free rsrc count.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	if (nb_lfs && (free_rsrc->tim < nb_lfs)) {
 		plt_tim_dbg("Requested LFs : %d Available LFs : %d", nb_lfs,
 			    free_rsrc->tim);
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	attach_req = mbox_alloc_msg_attach_resources(dev->mbox);
-	if (attach_req == NULL)
-		return -ENOSPC;
+	if (attach_req == NULL) {
+		nb_lfs = 0;
+		goto fail;
+	}
 	attach_req->modify = true;
 	attach_req->timlfs = nb_lfs ? nb_lfs : free_rsrc->tim;
 	nb_lfs = attach_req->timlfs;
 
 	rc = mbox_process(dev->mbox);
-	if (rc < 0) {
+	if (rc) {
 		plt_err("Unable to attach TIM LFs.");
-		return 0;
+		nb_lfs = 0;
+		goto fail;
 	}
 
 	rc = tim_fill_msix(roc_tim, nb_lfs);
@@ -317,28 +355,34 @@ roc_tim_init(struct roc_tim *roc_tim)
 		plt_err("Unable to get TIM MSIX vectors");
 
 		detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
-		if (detach_req == NULL)
-			return -ENOSPC;
+		if (detach_req == NULL) {
+			nb_lfs = 0;
+			goto fail;
+		}
 		detach_req->partial = true;
 		detach_req->timlfs = true;
 		mbox_process(dev->mbox);
-
-		return 0;
+		nb_lfs = 0;
 	}
 
+fail:
+	plt_spinlock_unlock(&sso->mbox_lock);
 	return nb_lfs;
 }
 
 void
 roc_tim_fini(struct roc_tim *roc_tim)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_tim->roc_sso)->dev;
+	struct sso *sso = roc_sso_to_sso_priv(roc_tim->roc_sso);
 	struct rsrc_detach_req *detach_req;
+	struct dev *dev = &sso->dev;
 
+	plt_spinlock_lock(&sso->mbox_lock);
 	detach_req = mbox_alloc_msg_detach_resources(dev->mbox);
 	PLT_ASSERT(detach_req);
 	detach_req->partial = true;
 	detach_req->timlfs = true;
 
 	mbox_process(dev->mbox);
+	plt_spinlock_unlock(&sso->mbox_lock);
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v4 5/5] event/cnxk: support to set runtime queue attributes
  2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
                         ` (3 preceding siblings ...)
  2022-05-16 17:35       ` [PATCH v4 4/5] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
@ 2022-05-16 17:35       ` Shijith Thotton
  4 siblings, 0 replies; 58+ messages in thread
From: Shijith Thotton @ 2022-05-16 17:35 UTC (permalink / raw)
  To: dev, jerinj
  Cc: Shijith Thotton, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

Added an API to set queue attributes at runtime and an API to get the
weight and affinity of a queue.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 doc/guides/eventdevs/features/cnxk.ini |  1 +
 drivers/event/cnxk/cn10k_eventdev.c    |  4 ++
 drivers/event/cnxk/cn9k_eventdev.c     |  4 ++
 drivers/event/cnxk/cnxk_eventdev.c     | 91 ++++++++++++++++++++++++--
 drivers/event/cnxk/cnxk_eventdev.h     | 19 ++++++
 5 files changed, 113 insertions(+), 6 deletions(-)

diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini
index 7633c6e3a2..bee69bf8f4 100644
--- a/doc/guides/eventdevs/features/cnxk.ini
+++ b/doc/guides/eventdevs/features/cnxk.ini
@@ -12,6 +12,7 @@ runtime_port_link          = Y
 multiple_queue_port        = Y
 carry_flow_id              = Y
 maintenance_free           = Y
+runtime_queue_attr         = Y
 
 [Eth Rx adapter Features]
 internal_port              = Y
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 94829e789c..450e1bf50c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -846,9 +846,13 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 987888d3db..3de22d7f4e 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1084,9 +1084,13 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
+
 	.queue_def_conf = cnxk_sso_queue_def_conf,
 	.queue_setup = cnxk_sso_queue_setup,
 	.queue_release = cnxk_sso_queue_release,
+	.queue_attr_get = cnxk_sso_queue_attribute_get,
+	.queue_attr_set = cnxk_sso_queue_attribute_set,
+
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index be021d86c9..a2829b817e 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 				  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
 				  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
 				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
-				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
+				  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
 }
 
 int
@@ -300,11 +301,27 @@ cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 		     const struct rte_event_queue_conf *queue_conf)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-
-	plt_sso_dbg("Queue=%d prio=%d", queue_id, queue_conf->priority);
-	/* Normalize <0-255> to <0-7> */
-	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, 0xFF, 0xFF,
-					  queue_conf->priority / 32);
+	uint8_t priority, weight, affinity;
+
+	/* Default weight and affinity */
+	dev->mlt_prio[queue_id].weight = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
+	dev->mlt_prio[queue_id].affinity = RTE_EVENT_QUEUE_AFFINITY_HIGHEST;
+
+	priority = CNXK_QOS_NORMALIZE(queue_conf->priority, 0,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(
+		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
+		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	plt_sso_dbg("Queue=%u prio=%u weight=%u affinity=%u", queue_id,
+		    priority, weight, affinity);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
 }
 
 void
@@ -314,6 +331,68 @@ cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+int
+cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint32_t *attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+
+	if (attr_id == RTE_EVENT_QUEUE_ATTR_WEIGHT)
+		*attr_value = dev->mlt_prio[queue_id].weight;
+	else if (attr_id == RTE_EVENT_QUEUE_ATTR_AFFINITY)
+		*attr_value = dev->mlt_prio[queue_id].affinity;
+	else
+		return -EINVAL;
+
+	return 0;
+}
+
+int
+cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev, uint8_t queue_id,
+			     uint32_t attr_id, uint64_t attr_value)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t priority, weight, affinity;
+	struct rte_event_queue_conf *conf;
+
+	conf = &event_dev->data->queues_cfg[queue_id];
+
+	switch (attr_id) {
+	case RTE_EVENT_QUEUE_ATTR_PRIORITY:
+		conf->priority = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_WEIGHT:
+		dev->mlt_prio[queue_id].weight = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_AFFINITY:
+		dev->mlt_prio[queue_id].affinity = attr_value;
+		break;
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_FLOWS:
+	case RTE_EVENT_QUEUE_ATTR_NB_ATOMIC_ORDER_SEQUENCES:
+	case RTE_EVENT_QUEUE_ATTR_EVENT_QUEUE_CFG:
+	case RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE:
+		/* FALLTHROUGH */
+		plt_sso_dbg("Unsupported attribute id %u", attr_id);
+		return -ENOTSUP;
+	default:
+		plt_err("Invalid attribute id %u", attr_id);
+		return -EINVAL;
+	}
+
+	priority = CNXK_QOS_NORMALIZE(conf->priority, 0,
+				      RTE_EVENT_DEV_PRIORITY_LOWEST,
+				      CNXK_SSO_PRIORITY_CNT);
+	weight = CNXK_QOS_NORMALIZE(
+		dev->mlt_prio[queue_id].weight, CNXK_SSO_WEIGHT_MIN,
+		RTE_EVENT_QUEUE_WEIGHT_HIGHEST, CNXK_SSO_WEIGHT_CNT);
+	affinity = CNXK_QOS_NORMALIZE(dev->mlt_prio[queue_id].affinity, 0,
+				      RTE_EVENT_QUEUE_AFFINITY_HIGHEST,
+				      CNXK_SSO_AFFINITY_CNT);
+
+	return roc_sso_hwgrp_set_priority(&dev->sso, queue_id, weight, affinity,
+					  priority);
+}
+
 void
 cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 		       struct rte_event_port_conf *port_conf)
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 5564746e6d..531f6d1a84 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,6 +38,11 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
+#define CNXK_SSO_PRIORITY_CNT  (0x8)
+#define CNXK_SSO_WEIGHT_MAX    (0x3f)
+#define CNXK_SSO_WEIGHT_MIN    (0x3)
+#define CNXK_SSO_WEIGHT_CNT    (CNXK_SSO_WEIGHT_MAX - CNXK_SSO_WEIGHT_MIN + 1)
+#define CNXK_SSO_AFFINITY_CNT  (0x10)
 
 #define CNXK_TT_FROM_TAG(x)	    (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x)	    (((x) >> 38) & SSO_TT_EMPTY)
@@ -54,6 +59,8 @@
 #define CN10K_GW_MODE_PREF     1
 #define CN10K_GW_MODE_PREF_WFE 2
 
+#define CNXK_QOS_NORMALIZE(val, min, max, cnt)                                 \
+	((min) + (val) / (((max) + (cnt) - 1) / (cnt)))
 #define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name)                               \
 	do {                                                                   \
 		if (strncmp(dev->driver->name, drv_name, strlen(drv_name)))    \
@@ -79,6 +86,11 @@ struct cnxk_sso_qos {
 	uint16_t iaq_prcnt;
 };
 
+struct cnxk_sso_mlt_prio {
+	uint8_t weight;
+	uint8_t affinity;
+};
+
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
 	uint8_t max_event_queues;
@@ -108,6 +120,7 @@ struct cnxk_sso_evdev {
 	uint64_t *timer_adptr_sz;
 	uint16_t vec_pool_cnt;
 	uint64_t *vec_pools;
+	struct cnxk_sso_mlt_prio mlt_prio[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/* Dev args */
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
@@ -234,6 +247,12 @@ void cnxk_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
 int cnxk_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 			 const struct rte_event_queue_conf *queue_conf);
 void cnxk_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id);
+int cnxk_sso_queue_attribute_get(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint32_t *attr_value);
+int cnxk_sso_queue_attribute_set(struct rte_eventdev *event_dev,
+				 uint8_t queue_id, uint32_t attr_id,
+				 uint64_t attr_value);
 void cnxk_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
 			    struct rte_event_port_conf *port_conf);
 int cnxk_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v4 1/5] eventdev: support to set queue attributes at runtime
  2022-05-16 17:35       ` [PATCH v4 1/5] eventdev: support to set " Shijith Thotton
@ 2022-05-16 18:02         ` Jerin Jacob
  2022-05-17  8:55           ` Mattias Rönnblom
  2022-05-19  8:49         ` Ray Kinsella
  1 sibling, 1 reply; 58+ messages in thread
From: Jerin Jacob @ 2022-05-16 18:02 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom, Ray Kinsella

On Mon, May 16, 2022 at 11:09 PM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> attributes at runtime from the values set during initialization using
> rte_event_queue_setup(). PMDs supporting this feature should expose the
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>

Hi @Mattias Rönnblom

Planning to merge this version for -rc1. Do you have any more comments
for this series?


> ---
>  doc/guides/eventdevs/features/default.ini |  1 +
>  doc/guides/rel_notes/release_22_07.rst    |  5 ++++
>  lib/eventdev/eventdev_pmd.h               | 22 ++++++++++++++++
>  lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>  lib/eventdev/rte_eventdev.h               | 32 ++++++++++++++++++++++-
>  lib/eventdev/version.map                  |  3 +++
>  6 files changed, 88 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
> index 2ea233463a..00360f60c6 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -17,6 +17,7 @@ runtime_port_link          =
>  multiple_queue_port        =
>  carry_flow_id              =
>  maintenance_free           =
> +runtime_queue_attr         =
>
>  ;
>  ; Features of a default Ethernet Rx adapter.
> diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
> index 88d6e96cc1..a7a912d665 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -65,6 +65,11 @@ New Features
>    * Added support for promiscuous mode on Windows.
>    * Added support for MTU on Windows.
>
> +* **Added support for setting queue attributes at runtime in eventdev.**
> +
> +  Added new API ``rte_event_queue_attr_set()``, to set event queue attributes
> +  at runtime.
> +
>
>  Removed Items
>  -------------
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index ce469d47a6..3b85d9f7a5 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>  typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
>                 uint8_t queue_id);
>
> +/**
> + * Set an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> +                                        uint8_t queue_id, uint32_t attr_id,
> +                                        uint64_t attr_value);
> +
>  /**
>   * Retrieve the default event port configuration.
>   *
> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
>         /**< Set up an event queue. */
>         eventdev_queue_release_t queue_release;
>         /**< Release an event queue. */
> +       eventdev_queue_attr_set_t queue_attr_set;
> +       /**< Set an event queue attribute. */
>
>         eventdev_port_default_conf_get_t port_def_conf;
>         /**< Get default port configuration. */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 532a253553..a31e99be02 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>         return 0;
>  }
>
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +                        uint64_t attr_value)
> +{
> +       struct rte_eventdev *dev;
> +
> +       RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +       dev = &rte_eventdevs[dev_id];
> +       if (!is_valid_queue(dev, queue_id)) {
> +               RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> +               return -EINVAL;
> +       }
> +
> +       if (!(dev->data->event_dev_cap &
> +             RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
> +               RTE_EDEV_LOG_ERR(
> +                       "Device %" PRIu8 " does not support changing queue attributes at runtime",
> +                       dev_id);
> +               return -ENOTSUP;
> +       }
> +
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
> +       return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
> +                                              attr_value);
> +}
> +
>  int
>  rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>                     const uint8_t queues[], const uint8_t priorities[],
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 42a5660169..a79b1397eb 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -225,7 +225,7 @@ struct rte_event;
>  /**< Event scheduling prioritization is based on the priority associated with
>   *  each event queue.
>   *
> - *  @see rte_event_queue_setup()
> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>   */
>  #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>  /**< Event scheduling prioritization is based on the priority associated with
> @@ -307,6 +307,13 @@ struct rte_event;
>   * global pool, or process signaling related to load balancing.
>   */
>
> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> +/**< Event device is capable of changing the queue attributes at runtime, i.e.
> + * after the rte_event_queue_setup() or rte_event_dev_start() call sequence. If
> + * this
> + * flag is not set, eventdev queue attributes can only be configured during
> + * rte_event_queue_setup().
> + */
> +
>  /* Event device priority levels */
>  #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>  /**< Highest priority expressed across eventdev subsystem
> @@ -702,6 +709,29 @@ int
>  rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>                         uint32_t *attr_value);
>
> +/**
> + * Set an event queue attribute.
> + *
> + * @param dev_id
> + *   Eventdev id
> + * @param queue_id
> + *   Eventdev queue id
> + * @param attr_id
> + *   The attribute ID to set
> + * @param attr_value
> + *   The attribute value to set
> + *
> + * @return
> + *   - 0: Successfully set attribute.
> + *   - -EINVAL: invalid device, queue or attr_id.
> + *   - -ENOTSUP: device does not support setting the event queue attribute.
> + *   - <0: failed to set event queue attribute
> + */
> +__rte_experimental
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +                        uint64_t attr_value);
> +
>  /* Event port specific APIs */
>
>  /* Event port configuration bitmap flags */
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index cd5dada07f..c581b75c18 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -108,6 +108,9 @@ EXPERIMENTAL {
>
>         # added in 22.03
>         rte_event_eth_rx_adapter_event_port_get;
> +
> +       # added in 22.07
> +       rte_event_queue_attr_set;
>  };
>
>  INTERNAL {
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v4 1/5] eventdev: support to set queue attributes at runtime
  2022-05-16 18:02         ` Jerin Jacob
@ 2022-05-17  8:55           ` Mattias Rönnblom
  2022-05-17 13:35             ` Jerin Jacob
  0 siblings, 1 reply; 58+ messages in thread
From: Mattias Rönnblom @ 2022-05-17  8:55 UTC (permalink / raw)
  To: Jerin Jacob, Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry, Ray Kinsella

On 2022-05-16 20:02, Jerin Jacob wrote:
> On Mon, May 16, 2022 at 11:09 PM Shijith Thotton <sthotton@marvell.com> wrote:
>>
>> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
>> attributes at runtime from the values set during initialization using
>> rte_event_queue_setup(). PMDs supporting this feature should expose the
>> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> Acked-by: Jerin Jacob <jerinj@marvell.com>
> 
> Hi @Mattias Rönnblom
> 
> Planning to merge this version for -rc1. Do you have any more comments
> for this series?
> 

No.

> 
>> [quoted patch body snipped; identical to the copy quoted in full above]


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v4 1/5] eventdev: support to set queue attributes at runtime
  2022-05-17  8:55           ` Mattias Rönnblom
@ 2022-05-17 13:35             ` Jerin Jacob
  0 siblings, 0 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-05-17 13:35 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Shijith Thotton, dpdk-dev, Jerin Jacob, Pavan Nikhilesh,
	Van Haaren, Harry, Ray Kinsella

On Tue, May 17, 2022 at 2:25 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> On 2022-05-16 20:02, Jerin Jacob wrote:
> > On Mon, May 16, 2022 at 11:09 PM Shijith Thotton <sthotton@marvell.com> wrote:
> >>
> >> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> >> attributes at runtime from the values set during initialization using
> >> rte_event_queue_setup(). PMDs supporting this feature should expose the
> >> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
> >>
> >> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> >> Acked-by: Jerin Jacob <jerinj@marvell.com>
> >
> > Hi @Mattias Rönnblom
> >
> > Planning to merge this version for -rc1. Do you have any more comments
> > for this series?
> >
>
> No.


Series applied to dpdk-next-net-eventdev/for-main. Thanks

>
> >
> >> [quoted patch body snipped; identical to the copy quoted in full above]
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v4 1/5] eventdev: support to set queue attributes at runtime
  2022-05-16 17:35       ` [PATCH v4 1/5] eventdev: support to set " Shijith Thotton
  2022-05-16 18:02         ` Jerin Jacob
@ 2022-05-19  8:49         ` Ray Kinsella
  1 sibling, 0 replies; 58+ messages in thread
From: Ray Kinsella @ 2022-05-19  8:49 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dev, jerinj, pbhagavatula, harry.van.haaren, mattias.ronnblom


Shijith Thotton <sthotton@marvell.com> writes:

> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> attributes at runtime from the values set during initialization using
> rte_event_queue_setup(). PMDs supporting this feature should expose the
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
>  doc/guides/eventdevs/features/default.ini |  1 +
>  doc/guides/rel_notes/release_22_07.rst    |  5 ++++
>  lib/eventdev/eventdev_pmd.h               | 22 ++++++++++++++++
>  lib/eventdev/rte_eventdev.c               | 26 ++++++++++++++++++
>  lib/eventdev/rte_eventdev.h               | 32 ++++++++++++++++++++++-
>  lib/eventdev/version.map                  |  3 +++
>  6 files changed, 88 insertions(+), 1 deletion(-)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3] doc: announce change in event queue conf structure
  2022-05-15 10:24     ` [PATCH v3] " Shijith Thotton
@ 2022-07-12 14:05       ` Jerin Jacob
  2022-07-13  6:52         ` [EXT] " Pavan Nikhilesh Bhagavatula
  2022-07-13  8:55         ` Mattias Rönnblom
  2022-07-17 12:52       ` Thomas Monjalon
  1 sibling, 2 replies; 58+ messages in thread
From: Jerin Jacob @ 2022-07-12 14:05 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry,
	Mattias Rönnblom, Ray Kinsella

On Sun, May 15, 2022 at 3:56 PM Shijith Thotton <sthotton@marvell.com> wrote:
>
> Structure rte_event_queue_conf will be extended to include fields to
> support the weight and affinity attributes. Once they are added in DPDK
> 22.11, the eventdev internal op queue_attr_get can be removed.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>


Acked-by: Jerin Jacob <jerinj@marvell.com>

@Van Haaren, Harry  @Mattias Rönnblom  @Ray Kinsella  @Pavan Nikhilesh
Please Ack if you are OK.

> ---
>  doc/guides/rel_notes/deprecation.rst | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 4e5b23c53d..04125db681 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -125,3 +125,6 @@ Deprecation Notices
>    applications should be updated to use the ``dmadev`` library instead,
>    with the underlying HW-functionality being provided by the ``ioat`` or
>    ``idxd`` dma drivers
> +
> +* eventdev: New fields to represent event queue weight and affinity will be
> +  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* RE: [EXT] Re: [PATCH v3] doc: announce change in event queue conf structure
  2022-07-12 14:05       ` Jerin Jacob
@ 2022-07-13  6:52         ` Pavan Nikhilesh Bhagavatula
  2022-07-13  8:55         ` Mattias Rönnblom
  1 sibling, 0 replies; 58+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2022-07-13  6:52 UTC (permalink / raw)
  To: Jerin Jacob, Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Van Haaren, Harry,
	Mattias Rönnblom, Ray Kinsella, timothy.mcdaniel,
	hemant.agrawal, sachin.saxena, mattias.ronnblom,
	Jerin Jacob Kollanukkaran, liangma, peter.mccarthy,
	harry.van.haaren, erik.g.carrillo, abhinandan.gujjar,
	jay.jayatheerthan, mdr, anatoly.burakov

+Cc
timothy.mcdaniel@intel.com;
hemant.agrawal@nxp.com;
sachin.saxena@oss.nxp.com;
mattias.ronnblom@ericsson.com;
jerinj@marvell.com;
liangma@liangbit.com;
peter.mccarthy@intel.com;
harry.van.haaren@intel.com;
erik.g.carrillo@intel.com;
abhinandan.gujjar@intel.com;
jay.jayatheerthan@intel.com;
mdr@ashroe.eu;
anatoly.burakov@intel.com;


> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, July 12, 2022 7:35 PM
> To: Shijith Thotton <sthotton@marvell.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Mattias Rönnblom
> <mattias.ronnblom@ericsson.com>; Ray Kinsella <mdr@ashroe.eu>
> Subject: [EXT] Re: [PATCH v3] doc: announce change in event queue conf
> structure
> 
> External Email
> 
> ----------------------------------------------------------------------
> On Sun, May 15, 2022 at 3:56 PM Shijith Thotton <sthotton@marvell.com>
> wrote:
> >
> > Structure rte_event_queue_conf will be extended to include fields to
> > support the weight and affinity attributes. Once they are added in DPDK
> > 22.11, the eventdev internal op queue_attr_get can be removed.
> >
> > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> 
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

> @Van Haaren, Harry  @Mattias Rönnblom  @Ray Kinsella  @Pavan Nikhilesh
> Please Ack if you are OK.
> 
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> > index 4e5b23c53d..04125db681 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -125,3 +125,6 @@ Deprecation Notices
> >    applications should be updated to use the ``dmadev`` library instead,
> >    with the underlying HW-functionality being provided by the ``ioat`` or
> >    ``idxd`` dma drivers
> > +
> > +* eventdev: New fields to represent event queue weight and affinity will
> be
> > +  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
> > --
> > 2.25.1
> >

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3] doc: announce change in event queue conf structure
  2022-07-12 14:05       ` Jerin Jacob
  2022-07-13  6:52         ` [EXT] " Pavan Nikhilesh Bhagavatula
@ 2022-07-13  8:55         ` Mattias Rönnblom
  2022-07-13  9:56           ` Pavan Nikhilesh Bhagavatula
  1 sibling, 1 reply; 58+ messages in thread
From: Mattias Rönnblom @ 2022-07-13  8:55 UTC (permalink / raw)
  To: Jerin Jacob, Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Van Haaren, Harry, Ray Kinsella

On 2022-07-12 16:05, Jerin Jacob wrote:
> On Sun, May 15, 2022 at 3:56 PM Shijith Thotton <sthotton@marvell.com> wrote:
>>
>> Structure rte_event_queue_conf will be extended to include fields to
>> support the weight and affinity attributes. Once they are added in DPDK
>> 22.11, the eventdev internal op queue_attr_get can be removed.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> 
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> 
> @Van Haaren, Harry  @Mattias Rönnblom  @Ray Kinsella  @Pavan Nikhilesh
> Please Ack if you are OK.
> 

Will there be new capabilities to go with those new fields, to let the 
application know whether they will be honored?

Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

>> ---
>>   doc/guides/rel_notes/deprecation.rst | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index 4e5b23c53d..04125db681 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -125,3 +125,6 @@ Deprecation Notices
>>     applications should be updated to use the ``dmadev`` library instead,
>>     with the underlying HW-functionality being provided by the ``ioat`` or
>>     ``idxd`` dma drivers
>> +
>> +* eventdev: New fields to represent event queue weight and affinity will be
>> +  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
>> --
>> 2.25.1
>>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* RE: [PATCH v3] doc: announce change in event queue conf structure
  2022-07-13  8:55         ` Mattias Rönnblom
@ 2022-07-13  9:56           ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 58+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2022-07-13  9:56 UTC (permalink / raw)
  To: Mattias Rönnblom, Jerin Jacob, Shijith Thotton
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Van Haaren, Harry, Ray Kinsella

> On 2022-07-12 16:05, Jerin Jacob wrote:
> > On Sun, May 15, 2022 at 3:56 PM Shijith Thotton <sthotton@marvell.com>
> wrote:
> >>
> >> Structure rte_event_queue_conf will be extended to include fields to
> >> support the weight and affinity attributes. Once they are added in DPDK
> >> 22.11, the eventdev internal op queue_attr_get can be removed.
> >>
> >> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> >
> >
> > Acked-by: Jerin Jacob <jerinj@marvell.com>
> >
> > @Van Haaren, Harry  @Mattias Rönnblom  @Ray Kinsella  @Pavan
> Nikhilesh
> > Please Ack if you are OK.
> >
> 
> Will there be new capabilities to go with those new fields? To let the
> application know if they will be honored, or not.
> 

I think this capability is already covered by
RTE_EVENT_DEV_CAP_QUEUE_QOS.

> Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> 
> >> ---
> >>   doc/guides/rel_notes/deprecation.rst | 3 +++
> >>   1 file changed, 3 insertions(+)
> >>
> >> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> >> index 4e5b23c53d..04125db681 100644
> >> --- a/doc/guides/rel_notes/deprecation.rst
> >> +++ b/doc/guides/rel_notes/deprecation.rst
> >> @@ -125,3 +125,6 @@ Deprecation Notices
> >>     applications should be updated to use the ``dmadev`` library instead,
> >>     with the underlying HW-functionality being provided by the ``ioat`` or
> >>     ``idxd`` dma drivers
> >> +
> >> +* eventdev: New fields to represent event queue weight and affinity
> will be
> >> +  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
> >> --
> >> 2.25.1
> >>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3] doc: announce change in event queue conf structure
  2022-05-15 10:24     ` [PATCH v3] " Shijith Thotton
  2022-07-12 14:05       ` Jerin Jacob
@ 2022-07-17 12:52       ` Thomas Monjalon
  1 sibling, 0 replies; 58+ messages in thread
From: Thomas Monjalon @ 2022-07-17 12:52 UTC (permalink / raw)
  To: Shijith Thotton
  Cc: dev, jerinj, pbhagavatula, harry.van.haaren, mattias.ronnblom, mdr

15/05/2022 12:24, Shijith Thotton:
> Structure rte_event_queue_conf will be extended to include fields to
> support weight and affinity attribute. Once it gets added in DPDK 22.11,
> eventdev internal op, queue_attr_get can be removed.
> 
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
    Acked-by: Jerin Jacob <jerinj@marvell.com>
    Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
    Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

There aren't enough interested non-Marvell parties.
But I judge it non-controversial, so
Applied, thanks.



^ permalink raw reply	[flat|nested] 58+ messages in thread

end of thread, other threads:[~2022-07-17 12:52 UTC | newest]

Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-29 13:10 [PATCH 0/6] Extend and set event queue attributes at runtime Shijith Thotton
2022-03-29 13:11 ` [PATCH 1/6] eventdev: support to set " Shijith Thotton
2022-03-30 10:58   ` Van Haaren, Harry
2022-04-04  9:35     ` Shijith Thotton
2022-04-04  9:45       ` Van Haaren, Harry
2022-03-30 12:14   ` Mattias Rönnblom
2022-04-04 11:45     ` Shijith Thotton
2022-03-29 13:11 ` [PATCH 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
2022-03-30 12:12   ` Mattias Rönnblom
2022-04-04  9:33     ` Shijith Thotton
2022-03-29 13:11 ` [PATCH 3/6] doc: announce change in event queue conf structure Shijith Thotton
2022-03-29 13:11 ` [PATCH 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
2022-03-29 13:11 ` [PATCH 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
2022-03-30 11:05   ` Van Haaren, Harry
2022-04-04  7:59     ` Shijith Thotton
2022-03-29 13:11 ` [PATCH 6/6] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
2022-03-29 18:49 ` [PATCH 0/6] Extend and set event queue attributes at runtime Jerin Jacob
2022-03-30 10:52   ` Van Haaren, Harry
2022-04-04  7:57     ` Shijith Thotton
2022-04-05  5:40 ` [PATCH v2 " Shijith Thotton
2022-04-05  5:40   ` [PATCH v2 1/6] eventdev: support to set " Shijith Thotton
2022-05-09 12:43     ` Jerin Jacob
2022-04-05  5:40   ` [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes Shijith Thotton
2022-05-09 12:46     ` Jerin Jacob
2022-04-05  5:41   ` [PATCH v2 3/6] doc: announce change in event queue conf structure Shijith Thotton
2022-05-09 12:47     ` Jerin Jacob
2022-05-15 10:24     ` [PATCH v3] " Shijith Thotton
2022-07-12 14:05       ` Jerin Jacob
2022-07-13  6:52         ` [EXT] " Pavan Nikhilesh Bhagavatula
2022-07-13  8:55         ` Mattias Rönnblom
2022-07-13  9:56           ` Pavan Nikhilesh Bhagavatula
2022-07-17 12:52       ` Thomas Monjalon
2022-04-05  5:41   ` [PATCH v2 4/6] test/event: test cases to test runtime queue attribute Shijith Thotton
2022-05-09 12:55     ` Jerin Jacob
2022-04-05  5:41   ` [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes Shijith Thotton
2022-05-09 12:57     ` Jerin Jacob
2022-04-05  5:41   ` [PATCH v2 6/6] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
2022-04-11 11:07   ` [PATCH v2 0/6] Extend and set event queue attributes at runtime Shijith Thotton
2022-05-15  9:53   ` [PATCH v3 0/5] " Shijith Thotton
2022-05-15  9:53     ` [PATCH v3 1/5] eventdev: support to set " Shijith Thotton
2022-05-15 13:11       ` Mattias Rönnblom
2022-05-16  3:57         ` Shijith Thotton
2022-05-16 10:23           ` Mattias Rönnblom
2022-05-16 12:12             ` Shijith Thotton
2022-05-15  9:53     ` [PATCH v3 2/5] eventdev: add weight and affinity to queue attributes Shijith Thotton
2022-05-15  9:53     ` [PATCH v3 3/5] test/event: test cases to test runtime queue attribute Shijith Thotton
2022-05-15  9:53     ` [PATCH v3 4/5] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
2022-05-15  9:53     ` [PATCH v3 5/5] event/cnxk: support to set runtime queue attributes Shijith Thotton
2022-05-16 17:35     ` [PATCH v4 0/5] Extend and set event queue attributes at runtime Shijith Thotton
2022-05-16 17:35       ` [PATCH v4 1/5] eventdev: support to set " Shijith Thotton
2022-05-16 18:02         ` Jerin Jacob
2022-05-17  8:55           ` Mattias Rönnblom
2022-05-17 13:35             ` Jerin Jacob
2022-05-19  8:49         ` Ray Kinsella
2022-05-16 17:35       ` [PATCH v4 2/5] eventdev: add weight and affinity to queue attributes Shijith Thotton
2022-05-16 17:35       ` [PATCH v4 3/5] test/event: test cases to test runtime queue attribute Shijith Thotton
2022-05-16 17:35       ` [PATCH v4 4/5] common/cnxk: use lock when accessing mbox of SSO Shijith Thotton
2022-05-16 17:35       ` [PATCH v4 5/5] event/cnxk: support to set runtime queue attributes Shijith Thotton

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).