DPDK patches and discussions
* [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
@ 2018-06-27 16:06 Honnappa Nagarahalli
  2018-06-27 16:06 ` Honnappa Nagarahalli
  2018-06-27 16:19 ` Jerin Jacob
  0 siblings, 2 replies; 6+ messages in thread
From: Honnappa Nagarahalli @ 2018-06-27 16:06 UTC (permalink / raw)
  To: dev; +Cc: honnappa.nagarahalli, gavin.hu, nd

DPDK offers a pipeline model of packet processing. One of the key
components of this model is core-to-core packet exchange.
rte_ring and rte_event_ring are the two mechanisms currently
provided for core-to-core communication. However, neither of them
separates its API from its implementation, which prevents hardware
queue implementations from being used in the pipeline model.
This change adds queue APIs and a driver framework so that HW
queues can be used for core-to-core communication in the pipeline
model.
When different implementations (ex: HW queues and rte_ring) are used
for the same object on different platforms, it is important to make
sure that the application remains portable. Hence, the features of
the different implementations must be elevated to the API level, so
that application writers can make the right choice.
Currently, only basic APIs are provided; more will be added as
required as this work progresses.

Honnappa Nagarahalli (1):
  queue: introduce queue APIs and driver framework

 lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
 lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
 lib/librte_queue/rte_queue_driver.h | 157 ++++++++++++++++++++++++++++
 3 files changed, 479 insertions(+)
 create mode 100644 lib/librte_queue/rte_queue.c
 create mode 100644 lib/librte_queue/rte_queue.h
 create mode 100644 lib/librte_queue/rte_queue_driver.h

-- 
2.7.4


* [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
  2018-06-27 16:06 [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework Honnappa Nagarahalli
@ 2018-06-27 16:06 ` Honnappa Nagarahalli
  2018-06-27 16:19 ` Jerin Jacob
  1 sibling, 0 replies; 6+ messages in thread
From: Honnappa Nagarahalli @ 2018-06-27 16:06 UTC (permalink / raw)
  To: dev; +Cc: honnappa.nagarahalli, gavin.hu, nd

rte_ring and rte_event_ring are the two mechanisms currently
provided for core-to-core communication. However, neither of them
separates its API from its implementation, which prevents the use
of hardware queue facilities. This change adds queue APIs and a
driver framework.

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
---
 lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
 lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
 lib/librte_queue/rte_queue_driver.h | 157 ++++++++++++++++++++++++++++
 3 files changed, 479 insertions(+)
 create mode 100644 lib/librte_queue/rte_queue.c
 create mode 100644 lib/librte_queue/rte_queue.h
 create mode 100644 lib/librte_queue/rte_queue_driver.h

diff --git a/lib/librte_queue/rte_queue.c b/lib/librte_queue/rte_queue.c
new file mode 100644
index 0000000..ddde778
--- /dev/null
+++ b/lib/librte_queue/rte_queue.c
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Arm
+ */
+
+#include <sys/queue.h>
+#include <string.h>
+
+#include <rte_tailq.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_dev.h>
+#include <rte_rwlock.h>
+#include <rte_eal_memconfig.h>
+#include "rte_queue.h"
+
+TAILQ_HEAD(rte_queue_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_queue_tailq = {
+	.name = RTE_TAILQ_QUEUE_NAME,
+};
+EAL_REGISTER_TAILQ(rte_queue_tailq)
+
+/* create the queue */
+struct rte_queue *
+rte_queue_create(struct rte_queue_ctx *instance, const char *name,
+		unsigned int count, int socket_id, unsigned int flags)
+{
+	struct rte_queue *q = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_queue_list *queue_list = NULL;
+
+	RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->create, NULL);
+
+	queue_list = RTE_TAILQ_CAST(rte_queue_tailq.head, rte_queue_list);
+
+	te = rte_zmalloc("QUEUE_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	if (instance->ops->create(instance->device, count, socket_id,
+					flags, &q)) {
+		rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+		rte_free(te);
+
+		return NULL;
+	}
+
+	te->data = q;
+	TAILQ_INSERT_TAIL(queue_list, te, next);
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	return q;
+}
+
+/* free the queue */
+void
+rte_queue_free(struct rte_queue_ctx *instance, struct rte_queue *q)
+{
+	struct rte_tailq_entry *te = NULL;
+	struct rte_queue_list *queue_list = NULL;
+
+	RTE_FUNC_PTR_OR_RET(*instance->ops->free);
+
+	queue_list = RTE_TAILQ_CAST(rte_queue_tailq.head, rte_queue_list);
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	/* find the tailq entry */
+	TAILQ_FOREACH(te, queue_list, next) {
+		if (te->data == (void *) q)
+			break;
+	}
+
+	if (te == NULL) {
+		rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+		return;
+	}
+
+	TAILQ_REMOVE(queue_list, te, next);
+
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	rte_free(te);
+
+	instance->ops->free(instance->device, q);
+}
+
+/* enqueue to the queue */
+unsigned int
+rte_queue_enqueue_burst(struct rte_queue_ctx *instance, struct rte_queue *q,
+		void * const *obj_table, unsigned int n,
+		unsigned int *free_space)
+{
+	return instance->ops->enqueue_burst(instance->device, q,
+						obj_table, n, free_space);
+}
+
+/* dequeue from the queue */
+unsigned int
+rte_queue_dequeue_burst(struct rte_queue_ctx *instance, struct rte_queue *q,
+		void **obj_table, unsigned int n,
+		unsigned int *available)
+{
+	return instance->ops->dequeue_burst(instance->device, q,
+						obj_table, n, available);
+}
+
+/* return size of the queue */
+unsigned int
+rte_queue_get_size(struct rte_queue_ctx *instance, const struct rte_queue *q)
+{
+	return instance->ops->get_size(instance->device, q);
+}
+
+/* return usable size of the queue */
+unsigned int
+rte_queue_get_capacity(struct rte_queue_ctx *instance,
+		const struct rte_queue *q)
+{
+	return instance->ops->get_capacity(instance->device, q);
+}
diff --git a/lib/librte_queue/rte_queue.h b/lib/librte_queue/rte_queue.h
new file mode 100644
index 0000000..90dc34d
--- /dev/null
+++ b/lib/librte_queue/rte_queue.h
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Arm
+ */
+
+/**
+ * @file rte_queue.h
+ * @b EXPERIMENTAL: these APIs may change without prior notice
+ *
+ * RTE Queue
+ *
+ * This provides queue APIs for passing any data from one core to another.
+ */
+
+#ifndef _RTE_QUEUE_
+#define _RTE_QUEUE_
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+
+#define RTE_TAILQ_QUEUE_NAME "RTE_QUEUE"
+
+/**
+ * Context for queue device
+ *
+ * Queue instance for each driver to register queue operations.
+ */
+struct rte_queue_ctx {
+	void *device;
+	/**< Queue device attached */
+	const struct rte_queue_ops *ops;
+	/**< Pointer to queue ops for the device */
+};
+
+/**
+ * Handle to the implementation specific queue
+ */
+struct rte_queue {
+	RTE_STD_C11
+	union {
+		void *private_data;
+		/**< Queue implementation specific data */
+		uintptr_t queue_handle;
+		/**< Queue handle */
+	};
+};
+
+#define RTE_QUEUE_SP_ENQ 0x0001 /**< The enqueue is "single-producer". */
+#define RTE_QUEUE_SC_DEQ 0x0002 /**< The dequeue is "single-consumer". */
+#define RTE_QUEUE_NON_BLOCK 0x0004
+/**< On the same queue, producers do not block other producers,
+ *   consumers do not block other consumers.
+ */
+
+/**
+ * Create a queue
+ *
+ * This function creates a queue and returns a handle to it.
+ *
+ * @param instance
+ *   Queue context holding the driver ops to use
+ * @param name
+ *   Name to be given to the queue
+ * @param count
+ *   Minimum number of elements to be stored in the queue. If this is
+ *   not a power of 2, some implementations might use extra memory.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   An OR of the following:
+ *    - RTE_QUEUE_SP_ENQ: If this flag is set, only one producer will
+ *      use this queue at a time. Otherwise, multiple producers use
+ *      this queue simultaneously.
+ *    - RTE_QUEUE_SC_DEQ: If this flag is set, only one consumer uses
+ *      this queue at a time. Otherwise, multiple consumers may use this
+ *      queue simultaneously.
+ *    - RTE_QUEUE_NON_BLOCK: If this flag is set, underlying queue
+ *      implementation should not block threads while doing queue operations.
+ *
+ * @return
+ *   On success, the pointer to the new allocated queue. NULL on error with
+ *    rte_errno set appropriately. Possible errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_queue *__rte_experimental
+rte_queue_create(struct rte_queue_ctx *instance, const char *name,
+		unsigned int count, int socket_id, unsigned int flags);
+
+/**
+ * Destroy the queue.
+ *
+ * @param q
+ *   Queue to free
+ */
+void __rte_experimental
+rte_queue_free(struct rte_queue_ctx *instance, struct rte_queue *q);
+
+/**
+ * Search for a queue based on its name
+ *
+ * @param name
+ *   The name of the queue.
+ * @return
+ *   The pointer to the queue matching the name, or NULL if not found,
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - ENOENT - required entry not available to return.
+ */
+struct rte_queue *__rte_experimental
+rte_queue_lookup(const char *name);
+
+/**
+ * Enqueue a set of objects onto a queue
+ *
+ * @param q
+ *   pointer to queue
+ * @param obj_table
+ *   pointer to an array of void * pointers (objects, events, pkts etc)
+ * @param n
+ *   number of objects to add to the queue from obj_table
+ * @param free_space
+ *   if non-NULL, updated to indicate the number of free slots in the
+ *   queue once the enqueue has completed
+ * @return
+ *   Actual number of objects enqueued.
+ */
+unsigned int __rte_experimental
+rte_queue_enqueue_burst(struct rte_queue_ctx *instance, struct rte_queue *q,
+		void * const *obj_table, unsigned int n,
+		unsigned int *free_space);
+
+/**
+ * Dequeue a set of objects from a queue
+ *
+ * @param q
+ *   pointer to queue
+ * @param obj_table
+ *   pointer to an array of void * pointers (objects, events, pkts etc)
+ * @param n
+ *   number of objects to dequeue from the queue to obj_table.
+ *   obj_table is assumed to have enough space.
+ * @param available
+ *   if non-NULL, updated to indicate the number of objects remaining in
+ *   the queue once the dequeue has completed
+ * @return
+ *   Actual number of objects dequeued from queue, 0 if queue is empty
+ */
+unsigned int __rte_experimental
+rte_queue_dequeue_burst(struct rte_queue_ctx *instance, struct rte_queue *q,
+		void **obj_table, unsigned int n,
+		unsigned int *available);
+
+/**
+ * Returns the number of entries stored in the queue
+ *
+ * @param q
+ *   pointer to the queue
+ * @return
+ *   the number of elements in the queue
+ */
+unsigned int __rte_experimental
+rte_queue_get_count(struct rte_queue_ctx *instance, const struct rte_queue *q);
+
+/**
+ * Returns the number of free elements in the queue
+ *
+ * @param q
+ *   pointer to the queue
+ * @return
+ *   the number of free slots in the queue, i.e. the number of events that
+ *   can be successfully enqueued before dequeue must be called
+ */
+unsigned int __rte_experimental
+rte_queue_get_free_count(struct rte_queue_ctx *instance,
+		const struct rte_queue *q);
+
+/**
+ * Return the size of the queue.
+ *
+ * @param q
+ *   A pointer to the queue structure.
+ * @return
+ *   The size of the memory used by the queue.
+ *   NOTE: this is not the same as the usable space in the queue. To query that
+ *   use ``rte_queue_get_capacity()``.
+ */
+unsigned int __rte_experimental
+rte_queue_get_size(struct rte_queue_ctx *instance, const struct rte_queue *q);
+
+/**
+ * Return total number of elements which can be stored in the queue.
+ *
+ * @param q
+ *   A pointer to the queue structure.
+ * @return
+ *   The usable size of the queue.
+ */
+unsigned int __rte_experimental
+rte_queue_get_capacity(struct rte_queue_ctx *instance,
+		const struct rte_queue *q);
+
+#endif
diff --git a/lib/librte_queue/rte_queue_driver.h b/lib/librte_queue/rte_queue_driver.h
new file mode 100644
index 0000000..742cb6c
--- /dev/null
+++ b/lib/librte_queue/rte_queue_driver.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Arm
+ */
+
+/**
+ * @file rte_queue_driver.h
+ * @b EXPERIMENTAL: these APIs may change without prior notice
+ *
+ * Declarations for driver functions
+ *
+ */
+
+#ifndef _RTE_QUEUE_DRIVER_H_
+#define _RTE_QUEUE_DRIVER_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+
+/**
+ * Create a queue
+ *
+ * This function creates a queue and returns a handle to it.
+ *
+ * @param device
+ *   Queue device
+ * @param count
+ *   number of elements to be stored in the queue. If this is not a
+ *   power of 2, some implementations might use extra memory.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   An OR of the following:
+ *    - RTE_QUEUE_SP_ENQ: If this flag is set, only one producer uses
+ *      this queue at a time. Otherwise, multiple producers use this
+ *      queue simultaneously.
+ *    - RTE_QUEUE_SC_DEQ: If this flag is set, only one consumer uses
+ *      this queue at a time. Otherwise, multiple consumers use this
+ *      queue simultaneously.
+ * @return
+ *    - Returns 0 if the queue is created successfully.
+ *    - Returns E_RTE_NO_CONFIG - function could not get pointer to
+ *				  rte_config structure
+ *    - Returns E_RTE_SECONDARY - function was called from a secondary
+ *				  process instance
+ *    - Returns ENOSPC - the maximum number of memzones has already been
+ *			 allocated
+ *    - Returns EEXIST - a memzone with the same name already exists
+ *    - Returns ENOMEM - no appropriate memory area found in which to
+ *			 create memzone
+ */
+
+typedef int (*queue_create_t)(void *device, unsigned int count, int socket_id,
+		unsigned int flags, struct rte_queue **queue);
+
+/**
+ * De-allocate all memory used by the queue.
+ *
+ * @param q
+ *   Queue to free
+ */
+typedef int (*queue_free_t)(void *device, struct rte_queue *q);
+
+/**
+ * Enqueue a set of objects onto a queue
+ *
+ * @param q
+ *   pointer to queue
+ * @param obj_table
+ *   pointer to an array of void * pointers (objects, events, pkts etc)
+ * @param n
+ *   number of objects to add to the queue from obj_table
+ * @param free_space
+ *   if non-null, is updated to indicate the amount of free slots in the
+ *   queue once the enqueue has completed.
+ * @return
+ *   Actual number of objects enqueued.
+ */
+typedef unsigned int
+(*queue_enqueue_burst_t)(void *device, struct rte_queue *q,
+		void * const *obj_table,
+		unsigned int n, unsigned int *free_space);
+
+/**
+ * Dequeue a set of objects from a queue
+ *
+ * @param q
+ *   pointer to queue
+ * @param obj_table
+ *   pointer to an array of void * pointers (objects, events, pkts etc)
+ * @param n
+ *   number of objects to dequeue from the queue to obj_table.
+ *   obj_table is assumed to have enough space.
+ * @param available
+ *   if non-null, is updated to indicate the number of objects remaining in
+ *   the queue once the dequeue has completed
+ * @return
+ *   Actual number of objects dequeued from queue, 0 if queue is empty
+ */
+typedef unsigned int
+(*queue_dequeue_burst_t)(void *device, struct rte_queue *q,
+		void **obj_table,
+		unsigned int n, unsigned int *available);
+
+/**
+ * Return the size of the queue.
+ *
+ * @param q
+ *   A pointer to the queue structure.
+ * @return
+ *   The size of the data store used by the queue.
+ *   NOTE: this is not the same as the usable space in the queue. To query that
+ *   use ``rte_queue_get_capacity()``.
+ */
+typedef unsigned int
+(*queue_get_size_t)(void *device, const struct rte_queue *q);
+
+/**
+ * Return the number of elements which can be stored in the queue.
+ *
+ * @param q
+ *   A pointer to the queue structure.
+ * @return
+ *   The usable size of the queue.
+ */
+typedef unsigned int
+(*queue_get_capacity_t)(void *device, const struct rte_queue *q);
+
+/** Queue operations function pointer table */
+struct rte_queue_ops {
+	queue_create_t create;
+	/**< Queue create. */
+	queue_free_t free;
+	/**< Destroy queue. */
+	queue_enqueue_burst_t enqueue_burst;
+	/**< Enqueue objects to the queue. */
+	queue_dequeue_burst_t dequeue_burst;
+	/**< Dequeue objects from the queue. */
+	queue_get_size_t get_size;
+	/**< Get size of the memory used by the queue */
+	queue_get_capacity_t get_capacity;
+	/**< Get total number of usable slots in the queue. */
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
2.7.4


* Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
  2018-06-27 16:06 [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework Honnappa Nagarahalli
  2018-06-27 16:06 ` Honnappa Nagarahalli
@ 2018-06-27 16:19 ` Jerin Jacob
  2018-07-11  2:02   ` Honnappa Nagarahalli
  1 sibling, 1 reply; 6+ messages in thread
From: Jerin Jacob @ 2018-06-27 16:19 UTC (permalink / raw)
  To: Honnappa Nagarahalli; +Cc: dev, gavin.hu, nd

-----Original Message-----
> Date: Wed, 27 Jun 2018 11:06:13 -0500
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> To: dev@dpdk.org
> CC: honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
> Subject: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
> X-Mailer: git-send-email 2.7.4
> 
> 
> DPDK offers pipeline model of packet processing. One of the key
> components of this model is the core to core packet exchange.
> rte_ring and rte_event_ring functions are 2 methods provided
> currently for core to core communication. However, these two
> do not separate the APIs from implementation. This does not
> allow using hardware queue implementations in pipeline model.
> This change adds queue APIs and driver framework so that
> HW queues can be used for core to core communication in
> pipeline model.
> When different implementations (ex: HW queues and rte_ring) are used

Just to understand: do you have any HW in mind that can do generic
multi-producer/multi-consumer queue operations for core-to-core
communication in HW as an offload?



> for the same object in different platforms, it is important to
> make sure that the application is portable. Hence features of
> different implementations must be elevated to the API level, so that
> the application writers can make the right choice.
> Currently, basic APIs are created, will add more required APIs
> as this progresses.
> 
> Honnappa Nagarahalli (1):
>   queue: introduce queue APIs and driver framework
> 
>  lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
>  lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
>  lib/librte_queue/rte_queue_driver.h | 157 ++++++++++++++++++++++++++++
>  3 files changed, 479 insertions(+)
>  create mode 100644 lib/librte_queue/rte_queue.c
>  create mode 100644 lib/librte_queue/rte_queue.h
>  create mode 100644 lib/librte_queue/rte_queue_driver.h
> 
> --
> 2.7.4
> 


* Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
  2018-06-27 16:19 ` Jerin Jacob
@ 2018-07-11  2:02   ` Honnappa Nagarahalli
  2018-07-11  6:51     ` Jerin Jacob
  0 siblings, 1 reply; 6+ messages in thread
From: Honnappa Nagarahalli @ 2018-07-11  2:02 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Gavin Hu, nd, Hemant Agrawal



-----Original Message-----
From: Jerin Jacob <jerin.jacob@caviumnetworks.com> 
Sent: Wednesday, June 27, 2018 11:20 AM
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Cc: dev@dpdk.org; Gavin Hu <Gavin.Hu@arm.com>; nd <nd@arm.com>
Subject: Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework

-----Original Message-----
> Date: Wed, 27 Jun 2018 11:06:13 -0500
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> To: dev@dpdk.org
> CC: honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
> Subject: [dpdk-dev] [RFC] queue: introduce queue APIs and driver 
> framework
> X-Mailer: git-send-email 2.7.4
> 
> 
> DPDK offers pipeline model of packet processing. One of the key 
> components of this model is the core to core packet exchange.
> rte_ring and rte_event_ring functions are 2 methods provided currently 
> for core to core communication. However, these two do not separate the 
> APIs from implementation. This does not allow using hardware queue 
> implementations in pipeline model.
> This change adds queue APIs and driver framework so that HW queues can 
> be used for core to core communication in pipeline model.
> When different implementations (ex: HW queues and rte_ring) are used

Just to understand, Do you have any HW in mind where it can do generic multi producer/multi consumer queue operations for core to core in HW as offload.

It is my understanding that NXP SoCs provide this capability (Hemant,
please correct me if I am wrong).
The offload does not need to be a queue. It can be some other mechanism
(for example, enqueue/dequeue via the scheduler) as long as it performs
better than the rte_ring implementation.

> for the same object in different platforms, it is important to make 
> sure that the application is portable. Hence features of different 
> implementations must be elevated to the API level, so that the 
> application writers can make the right choice.
> Currently, basic APIs are created, will add more required APIs as this 
> progresses.
> 
> Honnappa Nagarahalli (1):
>   queue: introduce queue APIs and driver framework
> 
>  lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
>  lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
>  lib/librte_queue/rte_queue_driver.h | 157 
> ++++++++++++++++++++++++++++
>  3 files changed, 479 insertions(+)
>  create mode 100644 lib/librte_queue/rte_queue.c  create mode 100644 
> lib/librte_queue/rte_queue.h  create mode 100644 
> lib/librte_queue/rte_queue_driver.h
> 
> --
> 2.7.4
> 


* Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
  2018-07-11  2:02   ` Honnappa Nagarahalli
@ 2018-07-11  6:51     ` Jerin Jacob
  2018-07-16  2:51       ` Honnappa Nagarahalli
  0 siblings, 1 reply; 6+ messages in thread
From: Jerin Jacob @ 2018-07-11  6:51 UTC (permalink / raw)
  To: Honnappa Nagarahalli; +Cc: dev, Gavin Hu, nd, Hemant Agrawal

-----Original Message-----
> Date: Wed, 11 Jul 2018 02:02:32 +0000
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, Gavin Hu <Gavin.Hu@arm.com>, nd
>  <nd@arm.com>, Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: RE: [dpdk-dev] [RFC] queue: introduce queue APIs and driver
>  framework
> 
> External Email
> 
> -----Original Message-----
> From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Sent: Wednesday, June 27, 2018 11:20 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: dev@dpdk.org; Gavin Hu <Gavin.Hu@arm.com>; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
> 
> -----Original Message-----
> > Date: Wed, 27 Jun 2018 11:06:13 -0500
> > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > To: dev@dpdk.org
> > CC: honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
> > Subject: [dpdk-dev] [RFC] queue: introduce queue APIs and driver
> > framework
> > X-Mailer: git-send-email 2.7.4
> >
> >
> > DPDK offers pipeline model of packet processing. One of the key
> > components of this model is the core to core packet exchange.
> > rte_ring and rte_event_ring functions are 2 methods provided currently
> > for core to core communication. However, these two do not separate the
> > APIs from implementation. This does not allow using hardware queue
> > implementations in pipeline model.
> > This change adds queue APIs and driver framework so that HW queues can
> > be used for core to core communication in pipeline model.
> > When different implementations (ex: HW queues and rte_ring) are used
> 
> Just to understand, Do you have any HW in mind where it can do generic multi producer/multi consumer queue operations for core to core in HW as offload.
> 
>> It is my understanding that NXP SoCs provide this capability (Hemant, please correct me if I am wrong).
>> It is not needed that the offload is a queue. It can be some other mechanism (for ex: enqueue/dequeue via the scheduler) as long as it performs better than the rte_ring implementation.

eventdev already abstracts CPU-to-CPU communication for HW offloads.
If NXP's HW falls under scheduler offload, then it is already
abstracted over eventdev.


> 
> > for the same object in different platforms, it is important to make
> > sure that the application is portable. Hence features of different
> > implementations must be elevated to the API level, so that the
> > application writers can make the right choice.
> > Currently, basic APIs are created, will add more required APIs as this
> > progresses.
> >
> > Honnappa Nagarahalli (1):
> >   queue: introduce queue APIs and driver framework
> >
> >  lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
> >  lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
> >  lib/librte_queue/rte_queue_driver.h | 157
> > ++++++++++++++++++++++++++++
> >  3 files changed, 479 insertions(+)
> >  create mode 100644 lib/librte_queue/rte_queue.c  create mode 100644
> > lib/librte_queue/rte_queue.h  create mode 100644
> > lib/librte_queue/rte_queue_driver.h
> >
> > --
> > 2.7.4
> >


* Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver framework
  2018-07-11  6:51     ` Jerin Jacob
@ 2018-07-16  2:51       ` Honnappa Nagarahalli
  0 siblings, 0 replies; 6+ messages in thread
From: Honnappa Nagarahalli @ 2018-07-16  2:51 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Gavin Hu, nd, Hemant Agrawal

> -----Original Message-----
> From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Sent: Wednesday, June 27, 2018 11:20 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: dev@dpdk.org; Gavin Hu <Gavin.Hu@arm.com>; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] [RFC] queue: introduce queue APIs and driver 
> framework
> 
> -----Original Message-----
> > Date: Wed, 27 Jun 2018 11:06:13 -0500
> > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > To: dev@dpdk.org
> > CC: honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
> > Subject: [dpdk-dev] [RFC] queue: introduce queue APIs and driver 
> > framework
> > X-Mailer: git-send-email 2.7.4
> >
> >
> > DPDK offers pipeline model of packet processing. One of the key 
> > components of this model is the core to core packet exchange.
> > rte_ring and rte_event_ring functions are 2 methods provided 
> > currently for core to core communication. However, these two do not 
> > separate the APIs from implementation. This does not allow using 
> > hardware queue implementations in pipeline model.
> > This change adds queue APIs and driver framework so that HW queues 
> > can be used for core to core communication in pipeline model.
> > When different implementations (ex: HW queues and rte_ring) are used
> 
> Just to understand, Do you have any HW in mind where it can do generic multi producer/multi consumer queue operations for core to core in HW as offload.
> 
>> It is my understanding that NXP SoCs provide this capability (Hemant, please correct me if I am wrong).
>> It is not needed that the offload is a queue. It can be some other mechanism (for ex: enqueue/dequeue via the scheduler) as long as it performs better than the rte_ring implementation.

eventdev already abstracts CPU to CPU communication for HW offloads.
If NXP's HW comes under scheduler offload then it is already abstracted over eventdev.

I will let Hemant answer the specifics of the NXP implementation.

Currently, the pipeline model does not use the eventdev abstraction
for CPU-to-CPU communication. One option is to use the eventdev
abstraction in the pipeline model.

> 
> > for the same object in different platforms, it is important to make 
> > sure that the application is portable. Hence features of different 
> > implementations must be elevated to the API level, so that the 
> > application writers can make the right choice.
> > Currently, basic APIs are created, will add more required APIs as 
> > this progresses.
> >
> > Honnappa Nagarahalli (1):
> >   queue: introduce queue APIs and driver framework
> >
> >  lib/librte_queue/rte_queue.c        | 122 ++++++++++++++++++++++
> >  lib/librte_queue/rte_queue.h        | 200 ++++++++++++++++++++++++++++++++++++
> >  lib/librte_queue/rte_queue_driver.h | 157
> > ++++++++++++++++++++++++++++
> >  3 files changed, 479 insertions(+)
> >  create mode 100644 lib/librte_queue/rte_queue.c  create mode 100644 
> > lib/librte_queue/rte_queue.h  create mode 100644 
> > lib/librte_queue/rte_queue_driver.h
> >
> > --
> > 2.7.4
> >

