DPDK patches and discussions
* [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs
@ 2018-07-06  6:42 Nikhil Rao
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Nikhil Rao @ 2018-07-06  6:42 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: nikhil.rao, dev

The ethernet Tx adapter abstracts the transmit stage of an
event-driven packet processing application. The transmit
stage may be implemented with eventdev PMD support or with an
rte_service function implemented in the adapter. These APIs
provide a common configuration and control interface and
a transmit API for the eventdev PMD implementation.

The transmit port is specified using mbuf::port. The transmit
queue is specified using the rte_event_eth_tx_adapter_txq_set()
function. The mbuf is expected to carry a queue ID field in the future
(http://mails.dpdk.org/archives/dev/2018-February/090651.html),
at which point this function will be replaced with a macro.
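
For illustration, a worker handing a packet to the Tx adapter could look
like the sketch below. This is not part of the patch; worker_tx_pkt() is a
hypothetical helper and its arguments (adapter id, event device id, event
port, Tx port and Tx queue) are supplied by the application. The enqueue
call assumes an eventdev PMD with the INTERNAL_PORT capability; with the
service based adapter, rte_event_enqueue_burst() is used instead, on an
event queue linked to the adapter's event port.

	static inline void
	worker_tx_pkt(uint8_t id, uint8_t dev_id, uint8_t ev_port,
		      struct rte_mbuf *m, uint16_t tx_port, uint16_t tx_queue)
	{
		struct rte_event ev;

		m->port = tx_port;                             /* Tx port */
		rte_event_eth_tx_adapter_txq_set(m, tx_queue); /* Tx queue */

		ev.event = 0;  /* clear flow id, sched type, queue id, etc. */
		ev.mbuf = m;
		rte_event_eth_tx_adapter_enqueue(id, dev_id, ev_port, &ev, 1);
	}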

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---

This patch series adds the event ethernet Tx adapter which is
based on the following RFCs:
 * RFCv1 - http://mails.dpdk.org/archives/dev/2018-May/102936.html
 * RFCv2 - http://mails.dpdk.org/archives/dev/2018-June/104075.html

RFC -> V1:
=========

* Move port and tx queue id to mbuf from mbuf private area. (Jerin Jacob)

* Support for PMD transmit function. (Jerin Jacob)

* The mbuf change has been replaced with rte_event_eth_tx_adapter_txq_set().
The goal is to align with the proposed mbuf change for a qid field
(http://mails.dpdk.org/archives/dev/2018-February/090651.html). Once the mbuf
change is available, the function can be replaced with a macro with no impact
on applications.

* Various cleanups (Jerin Jacob)

 lib/librte_eventdev/rte_event_eth_tx_adapter.h | 497 +++++++++++++++++++++++++
 lib/librte_mbuf/rte_mbuf.h                     |   4 +-
 MAINTAINERS                                    |   5 +
 3 files changed, 505 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.h

diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
new file mode 100644
index 0000000..b85ca70
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -0,0 +1,497 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+
+#ifndef _RTE_EVENT_ETH_TX_ADAPTER_
+#define _RTE_EVENT_ETH_TX_ADAPTER_
+
+/**
+ * @file
+ *
+ * RTE Event Ethernet Tx Adapter
+ *
+ * The event ethernet Tx adapter provides configuration and data path APIs
+ * for the ethernet transmit stage of an event driven packet processing
+ * application. These APIs abstract the implementation of the transmit stage
+ * and allow the application to use eventdev PMD support or a common
+ * implementation.
+ *
+ * In the common implementation, the application enqueues mbufs to the adapter
+ * which runs as a rte_service function. The service function dequeues events
+ * from its event port and transmits the mbufs referenced by these events.
+ *
+ * The ethernet Tx event adapter APIs are:
+ *
+ *  - rte_event_eth_tx_adapter_create()
+ *  - rte_event_eth_tx_adapter_create_ext()
+ *  - rte_event_eth_tx_adapter_free()
+ *  - rte_event_eth_tx_adapter_start()
+ *  - rte_event_eth_tx_adapter_stop()
+ *  - rte_event_eth_tx_adapter_queue_add()
+ *  - rte_event_eth_tx_adapter_queue_del()
+ *  - rte_event_eth_tx_adapter_stats_get()
+ *  - rte_event_eth_tx_adapter_stats_reset()
+ *  - rte_event_eth_tx_adapter_enqueue()
+ *  - rte_event_eth_tx_adapter_event_port_get()
+ *  - rte_event_eth_tx_adapter_service_id_get()
+ *
+ * The application creates the adapter using
+ * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
+ *
+ * The adapter will use the common implementation when the eventdev PMD
+ * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
+ * The common implementation uses an event port that is created using the port
+ * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
+ * application can get the port identifier using
+ * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
+ * this port.
+ *
+ * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * flag set, Tx adapter events should be enqueued using the
+ * rte_event_eth_tx_adapter_enqueue() function, otherwise the application should
+ * use rte_event_enqueue_burst().
+ *
+ * Transmit queues can be added to and deleted from the adapter using the
+ * rte_event_eth_tx_adapter_queue_add() and
+ * rte_event_eth_tx_adapter_queue_del() APIs respectively.
+ *
+ * The application can start and stop the adapter using the
+ * rte_event_eth_tx_adapter_start/stop() calls.
+ *
+ * The common adapter implementation uses an EAL service function as described
+ * above and its execution is controlled using the rte_service APIs. The
+ * rte_event_eth_tx_adapter_service_id_get() function can be used to retrieve
+ * the adapter's service function ID.
+ *
+ * The ethernet port and transmit queue index to transmit the mbuf on are
+ * specified in the mbuf using rte_event_eth_tx_adapter_txq_set().
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include "rte_eventdev.h"
+
+#define RTE_EVENT_ETH_TX_ADAPTER_SERVICE_NAME_LEN 32
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Adapter configuration structure
+ *
+ * @see rte_event_eth_tx_adapter_create_ext
+ * @see rte_event_eth_tx_adapter_conf_cb
+ */
+struct rte_event_eth_tx_adapter_conf {
+	uint8_t event_port_id;
+	/**< Event port identifier, the adapter service function dequeues mbuf
+	 * events from this port.
+	 * @see RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+	 */
+	uint32_t max_nb_tx;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
+	 * cause the adapter to process more than max_nb_tx mbufs.
+	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Function type used for adapter configuration callback. The callback is
+ * used to fill in members of the struct rte_event_eth_tx_adapter_conf; it is
+ * invoked when creating an RTE service function based
+ * adapter implementation.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param dev_id
+ *  Event device identifier.
+ * @param [out] conf
+ *  Structure that needs to be populated by this callback.
+ * @param arg
+ *  Argument to the callback. This is the same as the conf_arg passed to
+ *  rte_event_eth_tx_adapter_create_ext().
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+typedef int (*rte_event_eth_tx_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
+				struct rte_event_eth_tx_adapter_conf *conf,
+				void *arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * A structure used to retrieve statistics for an eth Tx adapter instance.
+ */
+struct rte_event_eth_tx_adapter_stats {
+	uint64_t tx_retry;
+	/**< Number of transmit retries */
+	uint64_t tx_packets;
+	/**< Number of packets transmitted */
+	uint64_t tx_dropped;
+	/**< Number of packets dropped */
+};
+
+/** Event Eth Tx Adapter Structure */
+struct rte_event_eth_tx_adapter {
+	uint8_t id;
+	/**< Adapter Identifier */
+	uint8_t eventdev_id;
+	/**< Event device identifier */
+	uint32_t max_nb_tx;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
+	 * cause the adapter to process more than max_nb_tx mbufs.
+	 */
+	uint32_t nb_queues;
+	/**< Number of Tx queues in adapter */
+	int socket_id;
+	/**< socket id */
+	rte_spinlock_t tx_lock;
+	/**<  Synchronization with data path */
+	void *dev_private;
+	/**< PMD private data */
+	char mem_name[RTE_EVENT_ETH_TX_ADAPTER_SERVICE_NAME_LEN];
+	/**< Memory allocation name */
+	rte_event_eth_tx_adapter_conf_cb conf_cb;
+	/**< Configuration callback */
+	void *conf_arg;
+	/**< Configuration callback argument */
+	uint16_t dev_count;
+	/**< Highest port id supported + 1 */
+	struct rte_event_eth_tx_adapter_ethdev *txa_ethdev;
+	/**< Per ethernet device structure */
+	struct rte_event_eth_tx_adapter_stats stats;
+} __rte_cache_aligned;
+
+struct rte_event_eth_tx_adapters {
+	struct rte_event_eth_tx_adapter **data;
+};
+
+/* Per eth device structure */
+struct rte_event_eth_tx_adapter_ethdev {
+	/* Pointer to ethernet device */
+	struct rte_eth_dev *dev;
+	/* Number of queues added */
+	uint16_t nb_queues;
+	/* PMD specific queue data */
+	void *queues;
+};
+
+extern struct rte_event_eth_tx_adapters rte_event_eth_tx_adapters;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new event ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the event ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param port_config
+ *  Event port configuration, the adapter uses this configuration to
+ *  create an event port if needed.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_config);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new event ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the event ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param conf_cb
+ *  Callback function that initializes members of the
+ *  struct rte_event_eth_tx_adapter_conf passed into
+ *  it.
+ * @param conf_arg
+ *  Argument that is passed to the conf_cb function.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free an event ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure. If the adapter still has Tx queues
+ *      added to it, the function returns -EBUSY.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Start ethernet Tx event adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Stop ethernet Tx event adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues added successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device that have been added to this
+ * adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set Tx queue in the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @param queue
+ *  Tx queue index.
+ */
+void __rte_experimental
+rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the adapter event port. The adapter creates an event port if
+ * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * eth Tx capabilities of the event device.
+ *
+ * @param id
+ *  Adapter Identifier.
+ * @param[out] event_port_id
+ *  Event port pointer.
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
+static __rte_always_inline uint16_t __rte_experimental
+__rte_event_eth_tx_adapter_enqueue(uint8_t id, uint8_t dev_id, uint8_t port_id,
+				struct rte_event ev[],
+				uint16_t nb_events,
+				const event_tx_adapter_enqueue fn)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+	struct rte_event_eth_tx_adapter *txa =
+					rte_event_eth_tx_adapters.data[id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (id >= RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE ||
+		dev_id >= RTE_EVENT_MAX_DEVS ||
+		!rte_eventdevs[dev_id].attached) {
+		rte_errno = -EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = -EINVAL;
+		return 0;
+	}
+#endif
+	return fn((void *)txa, dev, dev->data->ports[port_id], ev, nb_events);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Enqueue a burst of event objects or a single event object supplied in
+ * *rte_event* structures on an event device designated by its *dev_id*
+ * through the event port specified by *port_id*. This function is supported
+ * if the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * capability flag set.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue that
+ * are supplied in the *ev* array of *rte_event* structures.
+ *
+ * The rte_event_eth_tx_adapter_enqueue() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param id
+ *  The identifier of the tx adapter.
+ * @param dev_id
+ *  The identifier of the device.
+ * @param port_id
+ *  The identifier of the event port.
+ * @param ev
+ *  Points to an array of *nb_events* objects of type *rte_event* structure
+ *  which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *  The number of event objects to enqueue, typically number of
+ *  rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed and the caller has to take care
+ *   of them, and rte_errno is set accordingly. Possible errno values include:
+ *   - -EINVAL  The port ID is invalid, device ID is invalid, an event's queue
+ *              ID is invalid, or an event's sched type doesn't match the
+ *              capabilities of the destination queue.
+ *   - -ENOSPC  The event port was backpressured and unable to enqueue
+ *              one or more events. This error code is only applicable to
+ *              closed systems.
+ */
+static inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_enqueue(uint8_t id, uint8_t dev_id,
+				uint8_t port_id,
+				struct rte_event ev[],
+				uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return __rte_event_eth_tx_adapter_enqueue(id, dev_id, port_id, ev,
+						nb_events,
+						dev->txa_enqueue);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter.
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the service ID of an adapter. If the adapter doesn't use
+ * an rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] service_id
+ *  A pointer to a uint32_t, to be filled in with the service id.
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure. If the adapter doesn't use an rte_service
+ *    function, this function returns -ESRCH.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif	/* _RTE_EVENT_ETH_TX_ADAPTER_ */
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8e6b4d2..216212c 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -464,7 +464,9 @@ struct rte_mbuf {
 	};
 	uint16_t nb_segs;         /**< Number of segments. */
 
-	/** Input port (16 bits to support more than 256 virtual ports). */
+	/** Input port (16 bits to support more than 256 virtual ports).
+	 * The event eth Tx adapter uses this field to specify the output port.
+	 */
 	uint16_t port;
 
 	uint64_t ol_flags;        /**< Offload features. */
diff --git a/MAINTAINERS b/MAINTAINERS
index dabb12d..ab23503 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -388,6 +388,11 @@ F: lib/librte_eventdev/*crypto_adapter*
 F: test/test/test_event_crypto_adapter.c
 F: doc/guides/prog_guide/event_crypto_adapter.rst
 
+Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
+M: Nikhil Rao <nikhil.rao@intel.com>
+T: git://dpdk.org/next/dpdk-next-eventdev
+F: lib/librte_eventdev/*eth_tx_adapter*
+
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
-- 
1.8.3.1


* [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-07-06  6:42 [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
@ 2018-07-06  6:42 ` Nikhil Rao
  2018-07-10 10:56   ` Pavan Nikhilesh
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 3/4] eventdev: add eth Tx adapter implementation Nikhil Rao
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Nikhil Rao @ 2018-07-06  6:42 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: nikhil.rao, dev

The caps API allows the application to query whether the transmit
stage is implemented in the eventdev PMD or uses the common
rte_service function. The PMD callbacks support the
eventdev PMD implementation of the adapter.
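
For example, an application could use the caps API to choose the enqueue
path as in the sketch below (illustrative only; "id", "dev_id" and "ev_port"
are application supplied identifiers, "ev" is an array of "n" events, and
"ev_port" is the worker's event port):

	uint32_t caps = 0;
	uint16_t nb;

	if (rte_event_eth_tx_adapter_caps_get(dev_id, &caps))
		caps = 0;

	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
		/* Transmit stage implemented by the eventdev PMD */
		nb = rte_event_eth_tx_adapter_enqueue(id, dev_id, ev_port,
						      ev, n);
	else
		/* Service based adapter: the events must target an event
		 * queue linked to the port returned by
		 * rte_event_eth_tx_adapter_event_port_get()
		 */
		nb = rte_event_enqueue_burst(dev_id, ev_port, ev, n);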

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_eventdev.h     |  30 ++++-
 lib/librte_eventdev/rte_eventdev_pmd.h | 193 +++++++++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c     |  19 ++++
 3 files changed, 241 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b6fd6ee..e8df526 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1186,6 +1186,28 @@ struct rte_event {
 rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 				  uint32_t *caps);
 
+/* Ethdev Tx adapter capability bitmap flags */
+#define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT	0x1
+/**< This flag is set when the PMD supports a packet transmit callback
+ */
+
+/**
+ * Retrieve the event device's eth Tx adapter capabilities
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint32_t *caps);
+
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
@@ -1204,6 +1226,11 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 		uint16_t nb_events, uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+typedef uint16_t (*event_tx_adapter_enqueue)(void *adapter,
+			const struct rte_eventdev *dev, void *port,
+			struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
 #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
 /**< @internal Max length of name of event PMD */
 
@@ -1266,7 +1293,8 @@ struct rte_eventdev {
 	/**< Pointer to PMD dequeue function. */
 	event_dequeue_burst_t dequeue_burst;
 	/**< Pointer to PMD dequeue burst function. */
-
+	event_tx_adapter_enqueue txa_enqueue;
+	/**< Pointer to PMD eth Tx adapter enqueue function. */
 	struct rte_eventdev_data *data;
 	/**< Pointer to device data */
 	struct rte_eventdev_ops *dev_ops;
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 3fbb4d2..ccf07a8 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -789,6 +789,178 @@ typedef int (*eventdev_crypto_adapter_stats_reset)
 			(const struct rte_eventdev *dev,
 			 const struct rte_cryptodev *cdev);
 
+/**
+ * Retrieve the event device's eth Tx adapter capabilities.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_tx_adapter_caps_get_t)
+					(const struct rte_eventdev *dev,
+					uint32_t *caps);
+
+struct rte_event_eth_tx_adapter;
+
+/**
+ * Retrieve the adapter event port. The adapter creates an event port if
+ * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * eth Tx capabilities of the event device.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @param[out] port
+ *  Event port pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_event_port_get)
+					(struct rte_event_eth_tx_adapter *txa,
+					uint8_t *port);
+
+/**
+ * Initialize adapter PMD resources. This callback is invoked when
+ * adding the first Tx queue to the adapter.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_init_t)(
+					struct rte_event_eth_tx_adapter *txa);
+
+/**
+ * Free adapter PMD resources. This callback is invoked after the last queue
+ * has been deleted from the adapter.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_free_t)(
+					struct rte_event_eth_tx_adapter *txa);
+
+/**
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @param eth_dev
+ *  Pointer to ethernet device
+ *
+ * @param tx_queue_id
+ *  Transmit queue index
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_add_t)(
+					struct rte_event_eth_tx_adapter *txa,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device that have been added to this
+ * adapter.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @param eth_dev
+ *  Pointer to ethernet device
+ *
+ * @param tx_queue_id
+ *  Tx queue index
+ *
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_del_t)(
+					struct rte_event_eth_tx_adapter *txa,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Start the adapter.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_start_t)(
+					struct rte_event_eth_tx_adapter *txa);
+
+/**
+ * Stop the adapter.
+ *
+ * @param txa
+ *  Adapter pointer
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stop_t)(
+					struct rte_event_eth_tx_adapter *txa);
+
+struct rte_event_eth_tx_adapter_stats;
+
+/**
+ * Retrieve statistics for an adapter
+ *
+ * @param txa
+ *  Adapter Pointer
+ *
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter
+ *
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_get_t)(
+				struct rte_event_eth_tx_adapter *txa,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * Reset statistics for an adapter
+ *
+ * @param txa
+ *  Adapter Pointer
+ *
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_reset_t)(
+					struct rte_event_eth_tx_adapter *txa);
+
 /** Event device operations function pointer table */
 struct rte_eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -862,6 +1034,27 @@ struct rte_eventdev_ops {
 	eventdev_crypto_adapter_stats_reset crypto_adapter_stats_reset;
 	/**< Reset crypto stats */
 
+	eventdev_eth_tx_adapter_caps_get_t eth_tx_adapter_caps_get;
+	/**< Get ethernet Tx adapter capabilities */
+	eventdev_eth_tx_adapter_event_port_get eth_tx_adapter_event_port_get;
+	/**< Get event port */
+	eventdev_eth_tx_adapter_init_t eth_tx_adapter_init;
+	/**< Initialize eth Tx adapter */
+	eventdev_eth_tx_adapter_free_t eth_tx_adapter_free;
+	/**< Free ethernet Tx adapter resources */
+	eventdev_eth_tx_adapter_queue_add_t eth_tx_adapter_queue_add;
+	/**< Add Tx queues to the eth Tx adapter */
+	eventdev_eth_tx_adapter_queue_del_t eth_tx_adapter_queue_del;
+	/**< Delete Tx queues from the eth Tx adapter */
+	eventdev_eth_tx_adapter_start_t eth_tx_adapter_start;
+	/**< Start eth Tx adapter */
+	eventdev_eth_tx_adapter_stop_t eth_tx_adapter_stop;
+	/**< Stop eth Tx adapter */
+	eventdev_eth_tx_adapter_stats_get_t eth_tx_adapter_stats_get;
+	/**< Get eth Tx adapter statistics */
+	eventdev_eth_tx_adapter_stats_reset_t eth_tx_adapter_stats_reset;
+	/**< Reset eth Tx adapter statistics */
+
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
 
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 801810e..a29fae1 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -175,6 +175,25 @@
 		(dev, cdev, caps) : -ENOTSUP;
 }
 
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+	*caps = 0;
+
+	return dev->dev_ops->eth_tx_adapter_caps_get ?
+				(*dev->dev_ops->eth_tx_adapter_caps_get)(dev,
+									caps)
+				: 0;
+}
+
 static inline int
 rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
-- 
1.8.3.1


* [dpdk-dev] [PATCH 3/4] eventdev: add eth Tx adapter implementation
  2018-07-06  6:42 [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
@ 2018-07-06  6:42 ` Nikhil Rao
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
  2018-07-10 12:17 ` [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
  3 siblings, 0 replies; 9+ messages in thread
From: Nikhil Rao @ 2018-07-06  6:42 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: nikhil.rao, dev

This patch implements the Tx adapter APIs by invoking the
corresponding eventdev PMD callbacks and also provides the
common rte_service function based implementation used when
eventdev PMD support is absent.
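
As an illustration, the service based data path could be set up as in the
sketch below. The names "id", "dev_id", "eth_port", "tx_ev_queue" (a uint8_t
event queue id), "service_lcore" and "pconf" (a struct rte_event_port_conf)
are placeholders supplied by the application:

	uint8_t txa_port;
	uint32_t service_id;

	rte_event_eth_tx_adapter_create(id, dev_id, &pconf);
	rte_event_eth_tx_adapter_queue_add(id, eth_port, -1);

	/* Link the Tx event queue to the event port created by the adapter */
	rte_event_eth_tx_adapter_event_port_get(id, &txa_port);
	rte_event_port_link(dev_id, txa_port, &tx_ev_queue, NULL, 1);

	/* Run the adapter's service function on a service core */
	if (rte_event_eth_tx_adapter_service_id_get(id, &service_id) == 0)
		rte_service_map_lcore_set(service_id, service_lcore, 1);

	rte_event_eth_tx_adapter_start(id);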

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 config/rte_config.h                            |    1 +
 lib/librte_eventdev/rte_event_eth_tx_adapter.c | 1210 ++++++++++++++++++++++++
 config/common_base                             |    2 +-
 lib/librte_eventdev/Makefile                   |    2 +
 lib/librte_eventdev/meson.build                |    6 +-
 lib/librte_eventdev/rte_eventdev_version.map   |   13 +
 6 files changed, 1231 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.c

diff --git a/config/rte_config.h b/config/rte_config.h
index 0ba0ead..f60cc80 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -65,6 +65,7 @@
 #define RTE_EVENT_MAX_QUEUES_PER_DEV 64
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 10
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
new file mode 100644
index 0000000..b802a13
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -0,0 +1,1210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+#include <rte_spinlock.h>
+#include <rte_service_component.h>
+#include <rte_ethdev.h>
+
+#include "rte_eventdev_pmd.h"
+#include "rte_event_eth_tx_adapter.h"
+
+#define TXA_BATCH_SIZE		32
+#define TXA_SERVICE_NAME_LEN	32
+#define TXA_MEM_NAME_LEN	32
+#define TXA_FLUSH_THRESHOLD	1024
+#define TXA_RETRY_CNT		100
+#define TXA_MAX_NB_TX		128
+
+enum txa_pmd_type {
+	/* No PMD in use */
+	TXA_PMD_NONE = 0,
+	/* Event dev PMD */
+	TXA_PMD_EVENTDEV,
+	/* Service PMD */
+	TXA_PMD_SERVICE,
+};
+
+/* Tx retry callback structure */
+struct txa_retry {
+	/* Ethernet port id */
+	uint16_t port_id;
+	/* Tx queue */
+	uint16_t tx_queue;
+	/* Adapter ID */
+	uint8_t id;
+};
+
+/* Per queue structure */
+struct txa_service_queue_info {
+	/* Queue has been added */
+	uint8_t added;
+	/* Retry callback argument */
+	struct txa_retry txa_retry;
+	/* Tx buffer */
+	struct rte_eth_dev_tx_buffer *tx_buf;
+};
+
+/* PMD private structure */
+struct txa_service_data {
+	/* Event port ID */
+	uint8_t port_id;
+	/* Per adapter EAL service */
+	uint32_t service_id;
+	/* Adapter started */
+	int started;
+	/* Lock to serialize config updates with service function */
+	rte_spinlock_t tx_lock;
+	/* stats */
+	struct rte_event_eth_tx_adapter_stats stats;
+	/* Loop count to flush Tx buffers */
+	int loop_cnt;
+};
+
+struct txa_dev_ops {
+	event_tx_adapter_enqueue enqueue;
+	eventdev_eth_tx_adapter_queue_add_t queue_add;
+	eventdev_eth_tx_adapter_queue_del_t queue_del;
+	eventdev_eth_tx_adapter_stats_get_t stats_get;
+	eventdev_eth_tx_adapter_stats_reset_t stats_reset;
+	eventdev_eth_tx_adapter_init_t init;
+	eventdev_eth_tx_adapter_free_t free;
+	eventdev_eth_tx_adapter_start_t start;
+	eventdev_eth_tx_adapter_stop_t stop;
+	eventdev_eth_tx_adapter_event_port_get event_port_get;
+};
+
+/* Library private structure */
+struct txa_internal {
+	/* Tx adapter PMD type */
+	enum txa_pmd_type pmd_type;
+	/* Conf arg must be freed */
+	uint8_t conf_free;
+	/* Original dev ops from event device */
+	struct txa_dev_ops dev_ops;
+};
+
+#define txa_evdev(t) (&rte_eventdevs[(t)->eventdev_id])
+
+#define txa_internal(t) (txa_internals[(t)->id])
+
+#define txa_caps_get(t) txa_evdev(t)->dev_ops->eth_tx_adapter_caps_get
+
+#define txa_enqueue(t) txa_evdev(t)->txa_enqueue
+
+#define txa_event_port_get(t) \
+			txa_evdev(t)->dev_ops->eth_tx_adapter_event_port_get
+
+#define txa_pmd_free(t) txa_evdev(t)->dev_ops->eth_tx_adapter_free
+
+#define txa_pmd_init_func(t) txa_evdev(t)->dev_ops->eth_tx_adapter_init
+
+#define txa_pmd_none(t) (txa_internal(t)->pmd_type == TXA_PMD_NONE)
+
+#define txa_pmd_service(t) (txa_internal(t)->pmd_type == TXA_PMD_SERVICE)
+
+#define txa_queue_add(t) txa_evdev(t)->dev_ops->eth_tx_adapter_queue_add
+
+#define txa_queue_del(t) txa_evdev(t)->dev_ops->eth_tx_adapter_queue_del
+
+#define txa_start(t) txa_evdev(t)->dev_ops->eth_tx_adapter_start
+
+#define txa_stats_reset(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_reset
+
+#define txa_stats_get(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_get
+
+#define txa_stop(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stop
+
+struct rte_event_eth_tx_adapters rte_event_eth_tx_adapters;
+static struct txa_internal **txa_internals;
+
+static inline struct txa_service_queue_info *
+txa_service_queue(struct rte_event_eth_tx_adapter *txa, uint16_t port_id,
+		uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	tqi = txa->txa_ethdev[port_id].queues;
+
+	return tqi != NULL ? tqi + tx_queue_id : NULL;
+}
+
+static inline int
+txa_valid_id(uint8_t id)
+{
+	return id < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE;
+}
+
+#define RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+do { \
+	if (!txa_valid_id(id)) { \
+		RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \
+		return retval; \
+	} \
+} while (0)
+
+/* Private definition of tx queue identifier within mbuf */
+struct txa_mbuf_txq_id {
+	uint32_t resvd1;
+	uint16_t resvd2;
+	uint16_t txq_id;
+};
+
+#define TXA_QID_READ(m)						\
+({								\
+	const struct txa_mbuf_txq_id *txa_qid;			\
+	txa_qid = (struct txa_mbuf_txq_id *)(&(m)->hash);	\
+	txa_qid->txq_id;					\
+})
+
+#define TXA_QID_WRITE(m, qid)					\
+({								\
+	struct txa_mbuf_txq_id *txa_qid;			\
+	txa_qid = (struct txa_mbuf_txq_id *)(&(m)->hash);	\
+	txa_qid->txq_id	= qid;					\
+})
+
+static int
+txa_service_get_service_id(struct rte_event_eth_tx_adapter *txa,
+			uint32_t *service_id);
+static int
+txa_service_event_port_get(struct rte_event_eth_tx_adapter *txa, uint8_t *port);
+
+static uint16_t
+txa_service_enqueue(void *adapter,
+		const struct rte_eventdev *dev, void *port,
+		struct rte_event ev[], uint16_t nb_events);
+
+static int
+txa_service_pmd_init(struct rte_event_eth_tx_adapter *txa);
+
+static int
+txa_service_pmd_free(struct rte_event_eth_tx_adapter *txa);
+
+static int
+txa_service_queue_add(struct rte_event_eth_tx_adapter *txa,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id);
+static int
+txa_service_queue_del(struct rte_event_eth_tx_adapter *txa,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id);
+
+static int
+txa_service_start(struct rte_event_eth_tx_adapter *txa);
+
+static int
+txa_service_stats_get(struct rte_event_eth_tx_adapter *txa,
+		struct rte_event_eth_tx_adapter_stats *stats);
+
+static int
+txa_service_stats_reset(struct rte_event_eth_tx_adapter *txa);
+
+static int
+txa_service_stop(struct rte_event_eth_tx_adapter *txa);
+
+static struct rte_event_eth_tx_adapter **
+txa_adapter_init(void)
+{
+	const char *name = "rte_event_eth_tx_adapter_array";
+	const struct rte_memzone *mz;
+	unsigned int sz;
+
+	sz = sizeof(void *) *
+	    RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE;
+	sz = RTE_ALIGN(2 * sz, RTE_CACHE_LINE_SIZE);
+
+	mz = rte_memzone_lookup(name);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve_aligned(name, sz, rte_socket_id(), 0,
+						 RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			RTE_EDEV_LOG_ERR("failed to reserve memzone err = %"
+					PRId32, rte_errno);
+			return NULL;
+		}
+	}
+
+	return  mz->addr;
+}
+
+static inline struct rte_event_eth_tx_adapter *
+txa_id_to_adapter(uint8_t id)
+{
+	struct rte_event_eth_tx_adapter **p;
+
+	p = rte_event_eth_tx_adapters.data;
+	if (!p) {
+		int n = RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE;
+		p = rte_event_eth_tx_adapters.data = txa_adapter_init();
+		txa_internals = (struct txa_internal **)(p + n);
+	}
+	return p ? p[id] : NULL;
+}
+
+static void
+txa_save_ops(struct rte_event_eth_tx_adapter *txa)
+{
+	struct txa_dev_ops  *ops;
+
+	ops = &txa_internal(txa)->dev_ops;
+
+	ops->enqueue = txa_enqueue(txa);
+	ops->queue_add = txa_queue_add(txa);
+	ops->queue_del = txa_queue_del(txa);
+	ops->stats_get = txa_stats_get(txa);
+	ops->stats_reset = txa_stats_reset(txa);
+	ops->init = txa_pmd_init_func(txa);
+	ops->free = txa_pmd_free(txa);
+	ops->start = txa_start(txa);
+	ops->stop = txa_stop(txa);
+	ops->event_port_get = txa_event_port_get(txa);
+}
+
+static void
+txa_restore_ops(struct rte_event_eth_tx_adapter *txa)
+{
+	struct txa_dev_ops  *ops;
+
+	ops = &txa_internal(txa)->dev_ops;
+
+	txa_enqueue(txa) = ops->enqueue;
+	txa_queue_add(txa) = ops->queue_add;
+	txa_queue_del(txa) = ops->queue_del;
+	txa_stats_get(txa) = ops->stats_get;
+	txa_stats_reset(txa) = ops->stats_reset;
+	txa_pmd_init_func(txa) = ops->init;
+	txa_pmd_free(txa) = ops->free;
+	txa_start(txa) = ops->start;
+	txa_stop(txa) = ops->stop;
+	txa_event_port_get(txa) = ops->event_port_get;
+}
+
+static int
+txa_default_conf_cb(uint8_t id, uint8_t dev_id,
+		struct rte_event_eth_tx_adapter_conf *conf, void *arg)
+{
+	int ret;
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf *pc;
+	struct rte_event_eth_tx_adapter *txa;
+	struct rte_event_dev_config dev_conf;
+	int started;
+	uint8_t port_id;
+
+	pc = arg;
+	txa = txa_id_to_adapter(id);
+	dev = txa_evdev(txa);
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(dev_id);
+
+	port_id = dev_conf.nb_event_ports;
+	dev_conf.nb_event_ports += 1;
+
+	ret = rte_event_dev_configure(dev_id, &dev_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to configure event dev %u",
+						dev_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	ret = rte_event_port_setup(dev_id, port_id, pc);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
+					port_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	conf->event_port_id = port_id;
+	conf->max_nb_tx = TXA_MAX_NB_TX;
+	if (started)
+		ret = rte_event_dev_start(dev_id);
+	return ret;
+}
+
+static int
+txa_ethdev_ok(struct rte_event_eth_tx_adapter *txa, uint16_t eth_dev_id)
+{
+	return eth_dev_id < txa->dev_count;
+}
+
+static int
+txa_service_queue_array_alloc(struct rte_event_eth_tx_adapter *txa,
+			uint16_t port_id)
+
+{
+	struct txa_service_queue_info *tqi;
+	uint16_t nb_queue;
+
+	if (txa->txa_ethdev[port_id].queues)
+		return 0;
+
+	nb_queue = txa->txa_ethdev[port_id].dev->data->nb_tx_queues;
+	tqi = rte_zmalloc_socket(txa->mem_name,
+				nb_queue *
+				sizeof(struct txa_service_queue_info), 0,
+				txa->socket_id);
+	if (tqi == NULL)
+		return -ENOMEM;
+	txa->txa_ethdev[port_id].queues = tqi;
+	return 0;
+}
+
+static void
+txa_service_queue_array_free(struct rte_event_eth_tx_adapter *txa,
+			uint16_t port_id)
+{
+	struct rte_event_eth_tx_adapter_ethdev *txa_ethdev;
+	struct txa_service_queue_info *tqi;
+
+	txa_ethdev = &txa->txa_ethdev[port_id];
+	if (txa->txa_ethdev == NULL || txa_ethdev->nb_queues != 0)
+		return;
+
+	tqi = txa_ethdev->queues;
+	txa_ethdev->queues = NULL;
+	rte_free(tqi);
+}
+
+static int
+txa_cap_int_port(struct rte_event_eth_tx_adapter *txa)
+{
+	uint32_t caps = 0;
+
+	if (txa_caps_get(txa))
+		(txa_caps_get(txa))(txa_evdev(txa), &caps);
+	return !!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+}
+
+static int
+txa_init(struct rte_event_eth_tx_adapter *txa)
+{
+	int ret;
+	int txa_service;
+	uint16_t i;
+	struct rte_event_eth_tx_adapter_ethdev *txa_ethdev;
+
+	if (txa->txa_ethdev)
+		return 0;
+
+	txa_save_ops(txa);
+	txa_service = 0;
+
+	txa_ethdev = rte_zmalloc_socket(txa->mem_name,
+					txa->dev_count *
+					sizeof(*txa_ethdev), 0,
+					txa->socket_id);
+	if (txa_ethdev == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to alloc txa::txa_ethdev ");
+		return -ENOMEM;
+	}
+
+	RTE_ETH_FOREACH_DEV(i) {
+		if (i == txa->dev_count)
+			break;
+		txa_ethdev[i].dev = &rte_eth_devices[i];
+	}
+
+	if (!txa_cap_int_port(txa)) {
+		txa_pmd_init_func(txa) = txa_service_pmd_init;
+		txa_pmd_free(txa) = txa_service_pmd_free;
+		txa_queue_add(txa) = txa_service_queue_add;
+		txa_queue_del(txa) = txa_service_queue_del;
+		txa_enqueue(txa) = txa_service_enqueue;
+		txa_start(txa) = txa_service_start;
+		txa_stop(txa) = txa_service_stop;
+		txa_stats_get(txa) = txa_service_stats_get;
+		txa_stats_reset(txa) = txa_service_stats_reset;
+		txa_event_port_get(txa) = txa_service_event_port_get;
+		txa_service = 1;
+	}
+
+	ret = (txa_pmd_init_func(txa)) ?
+			txa_pmd_init_func(txa)(txa)
+			: 0;
+
+	txa_internal(txa)->pmd_type = TXA_PMD_NONE;
+	if (ret == 0) {
+		txa_internal(txa)->pmd_type = txa_service ?
+					TXA_PMD_SERVICE :
+					TXA_PMD_EVENTDEV;
+		txa->txa_ethdev = txa_ethdev;
+	} else {
+		rte_free(txa_ethdev);
+	}
+
+	return ret;
+}
+
+static void
+txa_service_tx(struct rte_event_eth_tx_adapter *txa, struct rte_event *ev,
+	uint32_t n)
+{
+	uint32_t i;
+	uint16_t nb_tx;
+	struct txa_service_data *data;
+	struct rte_event_eth_tx_adapter_ethdev *tdi;
+	struct rte_event_eth_tx_adapter_stats *stats;
+
+	tdi = txa->txa_ethdev;
+	data = txa->dev_private;
+	stats = &data->stats;
+
+	nb_tx = 0;
+	for (i = 0; i < n; i++) {
+		struct rte_mbuf *m;
+		uint16_t port;
+		uint16_t queue;
+		struct txa_service_queue_info *tqi;
+
+		m = ev[i].mbuf;
+		port = m->port;
+		queue = TXA_QID_READ(m);
+
+		tqi = txa_service_queue(txa, port, queue);
+		if (unlikely(tdi == NULL ||
+			tdi[port].nb_queues == 0 || !tqi->added)) {
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		nb_tx += rte_eth_tx_buffer(port, queue, tqi->tx_buf, m);
+	}
+
+	stats->tx_packets += nb_tx;
+}
+
+static int32_t
+txa_service_func(void *args)
+{
+	struct rte_event_eth_tx_adapter *txa = args;
+	uint8_t dev_id;
+	uint8_t port;
+	uint16_t n;
+	uint32_t nb_tx, max_nb_tx;
+	struct rte_event ev[TXA_BATCH_SIZE];
+	struct txa_service_data *data;
+
+	dev_id = txa->eventdev_id;
+	max_nb_tx = txa->max_nb_tx;
+	data = txa->dev_private;
+	port = data->port_id;
+
+	if (!rte_spinlock_trylock(&txa->tx_lock))
+		return 0;
+
+	for (nb_tx = 0; nb_tx < max_nb_tx; nb_tx += n) {
+
+		n = rte_event_dequeue_burst(dev_id, port, ev, RTE_DIM(ev), 0);
+		if (!n)
+			break;
+		txa_service_tx(txa, ev, n);
+	}
+
+	if ((data->loop_cnt++ & (TXA_FLUSH_THRESHOLD - 1)) == 0) {
+
+		struct rte_event_eth_tx_adapter_ethdev *tdi;
+		struct txa_service_queue_info *tqi;
+		struct rte_eth_dev *dev;
+		uint16_t i;
+
+		tdi = txa->txa_ethdev;
+		nb_tx = 0;
+
+		RTE_ETH_FOREACH_DEV(i) {
+			uint16_t q;
+
+			if (i == txa->dev_count)
+				break;
+
+			dev = tdi[i].dev;
+			if (tdi[i].nb_queues == 0)
+				continue;
+			for (q = 0; q < dev->data->nb_tx_queues; q++) {
+
+				tqi = txa_service_queue(txa, i, q);
+				if (!tqi->added)
+					continue;
+
+				nb_tx += rte_eth_tx_buffer_flush(i, q,
+							tqi->tx_buf);
+			}
+		}
+
+		data->stats.tx_packets += nb_tx;
+	}
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+}
+
+static int
+txa_service_init(struct rte_event_eth_tx_adapter *txa)
+{
+	int ret;
+	struct rte_service_spec service;
+	struct rte_event_eth_tx_adapter_conf conf;
+	struct txa_service_data *data;
+
+	data = txa->dev_private;
+
+	memset(&service, 0, sizeof(service));
+	snprintf(service.name, TXA_SERVICE_NAME_LEN,
+		"rte_event_eth_txa_%d", txa->id);
+	service.socket_id = txa->socket_id;
+	service.callback = txa_service_func;
+	service.callback_userdata = txa;
+	/* Service function handles locking for queue add/del updates */
+	service.capabilities = RTE_SERVICE_CAP_MT_SAFE;
+	ret = rte_service_component_register(&service, &data->service_id);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to register service %s err = %" PRId32,
+			service.name, ret);
+		return ret;
+	}
+
+	ret = txa->conf_cb(txa->id, txa->eventdev_id, &conf, txa->conf_arg);
+	if (ret) {
+		rte_service_component_unregister(data->service_id);
+		return ret;
+	}
+
+	data->port_id = conf.event_port_id;
+	txa->max_nb_tx = conf.max_nb_tx;
+	return 0;
+}
+
+static int
+txa_service_pmd_free(struct rte_event_eth_tx_adapter *txa)
+{
+	struct txa_service_data *data;
+
+	data = txa->dev_private;
+
+	if (txa->nb_queues != 0)
+		return 0;
+
+	if (txa_pmd_service(txa)) {
+		rte_service_component_runstate_set(data->service_id, 0);
+		while (rte_service_may_be_active(data->service_id))
+			rte_pause();
+		rte_service_component_unregister(data->service_id);
+	}
+
+	rte_free(txa->dev_private);
+	txa->dev_private = NULL;
+
+	return 0;
+}
+
+static int
+txa_service_pmd_init(struct rte_event_eth_tx_adapter *txa)
+{
+	int ret;
+	struct txa_service_data *data;
+
+	data = rte_zmalloc_socket(txa->mem_name,
+				sizeof(*data), 0,
+				txa->socket_id);
+	if (!data) {
+		RTE_EDEV_LOG_ERR("Failed to alloc PMD private data");
+		return -ENOMEM;
+	}
+
+	txa->dev_private = data;
+	ret = txa_service_init(txa);
+	if (ret) {
+		rte_free(data);
+		txa->dev_private = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+txa_service_ctrl(struct rte_event_eth_tx_adapter *txa, int start)
+{
+	int ret;
+	struct txa_service_data *data = txa->dev_private;
+
+	ret = rte_service_runstate_set(data->service_id, start);
+	if (ret == 0 && !start) {
+		while (rte_service_may_be_active(data->service_id))
+			rte_pause();
+	}
+	return ret;
+}
+
+static int
+txa_service_start(struct rte_event_eth_tx_adapter *txa)
+{
+	return txa_service_ctrl(txa, 1);
+}
+
+static int
+txa_service_stop(struct rte_event_eth_tx_adapter *txa)
+{
+	return txa_service_ctrl(txa, 0);
+}
+
+static int
+txa_service_event_port_get(struct rte_event_eth_tx_adapter *txa, uint8_t *port)
+{
+	struct txa_service_data *data = txa->dev_private;
+
+	*port = data->port_id;
+	return 0;
+}
+
+static void
+txa_service_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+			void *userdata)
+{
+	struct txa_retry *tr;
+	struct txa_service_data *data;
+	struct rte_event_eth_tx_adapter_stats *stats;
+	uint16_t sent = 0;
+	unsigned int retry = 0;
+	uint16_t i, n;
+
+	tr = (struct txa_retry *)(uintptr_t)userdata;
+	data = txa_id_to_adapter(tr->id)->dev_private;
+	stats = &data->stats;
+
+	do {
+		n = rte_eth_tx_burst(tr->port_id, tr->tx_queue,
+			       &pkts[sent], unsent - sent);
+
+		sent += n;
+	} while (sent != unsent && retry++ < TXA_RETRY_CNT);
+
+	for (i = sent; i < unsent; i++)
+		rte_pktmbuf_free(pkts[i]);
+
+	stats->tx_retry += retry;
+	stats->tx_packets += sent;
+	stats->tx_dropped += unsent - sent;
+}
+
+static struct rte_eth_dev_tx_buffer *
+txa_service_tx_buf_alloc(struct rte_event_eth_tx_adapter *txa,
+			const struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	port_id = dev->data->port_id;
+	tb = rte_zmalloc_socket(txa->mem_name,
+				RTE_ETH_TX_BUFFER_SIZE(TXA_BATCH_SIZE),
+				0,
+				rte_eth_dev_socket_id(port_id));
+	if (tb == NULL)
+		RTE_EDEV_LOG_ERR("Failed to allocate memory for tx buffer");
+	return tb;
+}
+
+static int
+txa_service_queue_del(struct rte_event_eth_tx_adapter *txa,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	if (tx_queue_id == -1) {
+		uint16_t i;
+		int ret = 0;
+
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			ret = txa_service_queue_del(txa, dev, i);
+			if (ret != 0)
+				break;
+		}
+		return ret;
+	}
+
+	port_id = dev->data->port_id;
+	tqi = txa_service_queue(txa, port_id, tx_queue_id);
+	if (!tqi || !tqi->added)
+		return 0;
+
+	tb = tqi->tx_buf;
+
+	tqi->added = 0;
+	tqi->tx_buf = NULL;
+	rte_free(tb);
+	txa->nb_queues--;
+	txa->txa_ethdev[port_id].nb_queues--;
+
+	txa_service_queue_array_free(txa, port_id);
+	return 0;
+}
+
+static int
+txa_service_queue_added(struct rte_event_eth_tx_adapter *txa,
+			const struct rte_eth_dev *dev,
+			uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	tqi = txa_service_queue(txa, dev->data->port_id, tx_queue_id);
+	return tqi && tqi->added;
+}
+
+static int
+txa_service_queue_add(struct rte_event_eth_tx_adapter *txa,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *data = txa->dev_private;
+	struct rte_event_eth_tx_adapter_ethdev *tdi;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	struct txa_retry *txa_retry;
+	int ret;
+
+	ret = txa_service_queue_array_alloc(txa, dev->data->port_id);
+	if (ret)
+		return ret;
+	tdi = &txa->txa_ethdev[dev->data->port_id];
+	if (tx_queue_id == -1) {
+		int nb_queues = dev->data->nb_tx_queues - tdi->nb_queues;
+		uint16_t i, j;
+		uint16_t *qdone;
+
+		qdone = rte_zmalloc(txa->mem_name,
+				nb_queues * sizeof(*qdone), 0);
+		j = 0;
+		for (i = 0; i < nb_queues; i++) {
+			if (txa_service_queue_added(txa, dev, i))
+				continue;
+			ret = txa_service_queue_add(txa, dev, i);
+			if (ret == 0)
+				qdone[j++] = i;
+			else
+				break;
+		}
+
+		if (i != nb_queues) {
+			for (i = 0; i < j; i++)
+				txa_service_queue_del(txa, dev, qdone[i]);
+		}
+		rte_free(qdone);
+		return ret;
+	}
+
+	if (txa_service_queue_added(txa, dev, tx_queue_id))
+		return 0;
+
+	tb = txa_service_tx_buf_alloc(txa, dev);
+	if (tb == NULL)
+		return -ENOMEM;
+
+	tqi = txa_service_queue(txa, dev->data->port_id, tx_queue_id);
+
+	txa_retry = &tqi->txa_retry;
+	txa_retry->id = txa->id;
+	txa_retry->port_id = dev->data->port_id;
+	txa_retry->tx_queue = tx_queue_id;
+
+	rte_eth_tx_buffer_init(tb, TXA_BATCH_SIZE);
+	rte_eth_tx_buffer_set_err_callback(tb,
+		txa_service_buffer_retry, txa_retry);
+
+	tqi->tx_buf = tb;
+	tqi->added = 1;
+	rte_service_component_runstate_set(data->service_id, 1);
+	tdi->nb_queues++;
+	txa->nb_queues++;
+	return 0;
+}
+
+static uint16_t
+txa_service_enqueue(void *adapter,
+		const struct rte_eventdev *dev, void *port,
+		struct rte_event ev[], uint16_t nb_events)
+{
+	RTE_SET_USED(adapter);
+	RTE_SET_USED(dev);
+	RTE_SET_USED(port);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(nb_events);
+
+	RTE_EDEV_LOG_ERR("Service adapter does not support enqueue callback");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static int
+txa_service_stats_get(struct rte_event_eth_tx_adapter *txa,
+		struct rte_event_eth_tx_adapter_stats *stats)
+{
+	struct txa_service_data *data;
+
+	data = txa->dev_private;
+	*stats = data->stats;
+	return 0;
+}
+
+static int
+txa_service_get_service_id(struct rte_event_eth_tx_adapter *txa,
+			uint32_t *service_id)
+{
+	struct txa_service_data *data;
+
+	data = txa->dev_private;
+	if (data == NULL)
+		return -ESRCH;
+
+	*service_id = data->service_id;
+	return 0;
+}
+
+static int
+txa_service_stats_reset(struct rte_event_eth_tx_adapter *txa)
+{
+	struct txa_service_data *data;
+
+	data = txa->dev_private;
+
+	memset(&data->stats, 0, sizeof(data->stats));
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_conf)
+{
+	struct rte_event_port_conf *cb_conf;
+	struct rte_event_eth_tx_adapter *txa;
+	int ret;
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	txa = txa_id_to_adapter(id);
+	if (txa != NULL) {
+		RTE_EDEV_LOG_ERR("Eth Tx adapter exists id = %" PRIu8, id);
+		return -EEXIST;
+	}
+
+	cb_conf = rte_malloc(NULL, sizeof(*cb_conf), 0);
+	if (cb_conf == NULL)
+		return -ENOMEM;
+	*cb_conf = *port_conf;
+	ret = rte_event_eth_tx_adapter_create_ext(id, dev_id,
+						txa_default_conf_cb,
+						cb_conf);
+	if (ret) {
+		rte_free(cb_conf);
+		return ret;
+	}
+
+	txa = txa_id_to_adapter(id);
+	txa_internal(txa)->conf_free = 1;
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg)
+{
+	struct rte_event_eth_tx_adapter *txa;
+	struct txa_internal *internal;
+	int socket_id;
+	char mem_name[TXA_SERVICE_NAME_LEN];
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	if (conf_cb == NULL)
+		return -EINVAL;
+
+	txa = txa_id_to_adapter(id);
+	if (txa != NULL) {
+		RTE_EDEV_LOG_ERR("Eth Tx adapter exists id = %" PRIu8, id);
+		return -EEXIST;
+	}
+
+	socket_id = rte_event_dev_socket_id(dev_id);
+	snprintf(mem_name, TXA_MEM_NAME_LEN,
+		"rte_event_eth_txa_%d",
+		id);
+
+	txa = rte_zmalloc_socket(mem_name,
+				sizeof(*txa),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txa == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for tx adapter");
+		return -ENOMEM;
+	}
+
+	internal = rte_zmalloc_socket(mem_name,
+				sizeof(*internal),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (internal == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for tx adapter internal"
+			" data");
+		rte_free(txa);
+		return -ENOMEM;
+	}
+
+	txa->id = id;
+	txa->eventdev_id = dev_id;
+	txa->socket_id = socket_id;
+	strncpy(txa->mem_name, mem_name, TXA_SERVICE_NAME_LEN);
+	txa->conf_cb = conf_cb;
+	txa->conf_arg = conf_arg;
+	rte_spinlock_init(&txa->tx_lock);
+	rte_event_eth_tx_adapters.data[id] = txa;
+	txa_internals[id] = internal;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+	struct rte_event_eth_tx_adapter *txa;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	return txa_event_port_get(txa) ?
+			txa_event_port_get(txa)(txa, event_port_id) :
+			-ENOTSUP;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id)
+{
+	struct rte_event_eth_tx_adapter *txa;
+	struct txa_internal *internal;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	if (txa->nb_queues) {
+		RTE_EDEV_LOG_ERR("%" PRIu16 " Tx queues not deleted",
+				txa->nb_queues);
+		return -EBUSY;
+	}
+
+	internal = txa_internal(txa);
+	txa_restore_ops(txa);
+	if (internal->conf_free)
+		rte_free(txa->conf_arg);
+	rte_free(txa);
+	rte_free(internal);
+	rte_event_eth_tx_adapters.data[id] = NULL;
+	txa_internals[id] = NULL;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_event_eth_tx_adapter *txa;
+	int ret;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	if (txa->dev_count == 0)
+		txa->dev_count = rte_eth_dev_count_total();
+
+	if (txa->dev_count == 0)
+		return -EINVAL;
+
+	if (!txa_ethdev_ok(txa, eth_dev_id)) {
+		RTE_EDEV_LOG_ERR("Hot plugged device is unsupported eth port %"
+				PRIu16, eth_dev_id);
+		return -ENOTSUP;
+	}
+
+	if (queue != -1 && (uint16_t)queue >=
+			rte_eth_devices[eth_dev_id].data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	ret = txa_init(txa);
+	if (ret)
+		return ret;
+
+	rte_spinlock_lock(&txa->tx_lock);
+	ret =  txa_queue_add(txa) ?
+			txa_queue_add(txa)(txa,
+				&rte_eth_devices[eth_dev_id],
+				queue)
+			: 0;
+
+	if (txa->nb_queues == 0) {
+		txa_pmd_free(txa)(txa);
+		txa_internal(txa)->pmd_type = TXA_PMD_NONE;
+	}
+
+	rte_spinlock_unlock(&txa->tx_lock);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+					uint16_t eth_dev_id,
+					int32_t queue)
+{
+	struct rte_event_eth_tx_adapter *txa;
+	int ret;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	if (queue != -1 && (uint16_t)queue >=
+			rte_eth_devices[eth_dev_id].data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	if (txa_pmd_none(txa))
+		return 0;
+
+	rte_spinlock_lock(&txa->tx_lock);
+
+	ret = txa_queue_del(txa) ?
+			txa_queue_del(txa)(txa,
+					&rte_eth_devices[eth_dev_id],
+					queue)
+					: 0;
+
+	if (ret != 0) {
+		rte_spinlock_unlock(&txa->tx_lock);
+		return ret;
+	}
+
+	if (txa->nb_queues == 0) {
+		txa_pmd_free(txa)(txa);
+		txa_internal(txa)->pmd_type = TXA_PMD_NONE;
+	}
+
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct rte_event_eth_tx_adapter *txa;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL || service_id == NULL)
+		return -EINVAL;
+
+	return txa_pmd_service(txa) ?
+				txa_service_get_service_id(txa, service_id) :
+				-ESRCH;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id)
+{
+	struct rte_event_eth_tx_adapter *txa;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	if (txa_pmd_none(txa))
+		return 0;
+
+	return (txa_start(txa)) ? txa_start(txa)(txa) : 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats)
+{
+	struct rte_event_eth_tx_adapter *txa;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL || stats == NULL)
+		return -EINVAL;
+
+	if (txa_pmd_none(txa)) {
+		memset(stats, 0, sizeof(*stats));
+		return 0;
+	}
+
+	return txa_stats_get(txa) ?
+		txa_stats_get(txa)(txa, stats) :
+		-ENOTSUP;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id)
+{
+	struct rte_event_eth_tx_adapter *txa;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	if (txa_pmd_none(txa))
+		return 0;
+
+	return txa_stats_reset(txa) ?
+		txa_stats_reset(txa)(txa) : -ENOTSUP;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id)
+{
+	struct rte_event_eth_tx_adapter *txa;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	txa = txa_id_to_adapter(id);
+	if (txa == NULL)
+		return -EINVAL;
+
+	if (txa_pmd_none(txa))
+		return 0;
+
+	return (txa_stop(txa)) ? txa_stop(txa)(txa) : 0;
+}
+
+void __rte_experimental
+rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t txq_id)
+{
+	TXA_QID_WRITE(pkt, txq_id);
+}
diff --git a/config/common_base b/config/common_base
index 721e59b..ea5b06f 100644
--- a/config/common_base
+++ b/config/common_base
@@ -593,7 +593,7 @@ CONFIG_RTE_EVENT_MAX_DEVS=16
 CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
 CONFIG_RTE_EVENT_TIMER_ADAPTER_NUM_MAX=32
 CONFIG_RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE=32
-
+CONFIG_RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE=32
 #
 # Compile PMD for skeleton event device
 #
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index b3e2546..0c077f6 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -23,6 +23,7 @@ SRCS-y += rte_event_ring.c
 SRCS-y += rte_event_eth_rx_adapter.c
 SRCS-y += rte_event_timer_adapter.c
 SRCS-y += rte_event_crypto_adapter.c
+SRCS-y += rte_event_eth_tx_adapter.c
 
 # export include files
 SYMLINK-y-include += rte_eventdev.h
@@ -34,6 +35,7 @@ SYMLINK-y-include += rte_event_eth_rx_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter_pmd.h
 SYMLINK-y-include += rte_event_crypto_adapter.h
+SYMLINK-y-include += rte_event_eth_tx_adapter.h
 
 # versioning export map
 EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/meson.build b/lib/librte_eventdev/meson.build
index bd138bd..d885743 100644
--- a/lib/librte_eventdev/meson.build
+++ b/lib/librte_eventdev/meson.build
@@ -7,7 +7,8 @@ sources = files('rte_eventdev.c',
 		'rte_event_ring.c',
 		'rte_event_eth_rx_adapter.c',
 		'rte_event_timer_adapter.c',
-		'rte_event_crypto_adapter.c')
+		'rte_event_crypto_adapter.c',
+		'rte_event_eth_tx_adapter.c')
 headers = files('rte_eventdev.h',
 		'rte_eventdev_pmd.h',
 		'rte_eventdev_pmd_pci.h',
@@ -16,5 +17,6 @@ headers = files('rte_eventdev.h',
 		'rte_event_eth_rx_adapter.h',
 		'rte_event_timer_adapter.h',
 		'rte_event_timer_adapter_pmd.h',
-		'rte_event_crypto_adapter.h')
+		'rte_event_crypto_adapter.h'
+		'rte_event_eth_tx_adapter.h')
 deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index c3f18d6..8284c7c 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -109,4 +109,17 @@ EXPERIMENTAL {
 	rte_event_crypto_adapter_stats_get;
 	rte_event_crypto_adapter_stats_reset;
 	rte_event_crypto_adapter_stop;
+	rte_event_eth_tx_adapter_caps_get;
+	rte_event_eth_tx_adapter_create;
+	rte_event_eth_tx_adapter_create_ext;
+	rte_event_eth_tx_adapter_event_port_get;
+	rte_event_eth_tx_adapter_free;
+	rte_event_eth_tx_adapter_queue_add;
+	rte_event_eth_tx_adapter_queue_del;
+	rte_event_eth_tx_adapter_service_id_get;
+	rte_event_eth_tx_adapter_start;
+	rte_event_eth_tx_adapter_stats_get;
+	rte_event_eth_tx_adapter_stats_reset;
+	rte_event_eth_tx_adapter_stop;
+	rte_event_eth_tx_adapter_txq_set;
 };
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [dpdk-dev] [PATCH 4/4] eventdev: add auto test for eth Tx adapter
  2018-07-06  6:42 [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 3/4] eventdev: add eth Tx adapter implementation Nikhil Rao
@ 2018-07-06  6:42 ` Nikhil Rao
  2018-07-10 12:17 ` [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
  3 siblings, 0 replies; 9+ messages in thread
From: Nikhil Rao @ 2018-07-06  6:42 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: nikhil.rao, dev

This patch adds tests for the eth Tx adapter APIs. It also
tests the data path for the rte_service function based
implementation of the APIs.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 test/test/test_event_eth_tx_adapter.c | 633 ++++++++++++++++++++++++++++++++++
 MAINTAINERS                           |   1 +
 lib/librte_eventdev/meson.build       |   2 +-
 test/test/Makefile                    |   1 +
 test/test/meson.build                 |   2 +
 5 files changed, 638 insertions(+), 1 deletion(-)
 create mode 100644 test/test/test_event_eth_tx_adapter.c

diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
new file mode 100644
index 0000000..a5a0112
--- /dev/null
+++ b/test/test/test_event_eth_tx_adapter.c
@@ -0,0 +1,633 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+#include <string.h>
+#include <rte_common.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_eth_ring.h>
+#include <rte_service.h>
+#include <rte_event_eth_tx_adapter.h>
+
+#include "test.h"
+
+#define MAX_NUM_QUEUE		RTE_PMD_RING_MAX_RX_RINGS
+#define TEST_INST_ID		0
+#define TEST_DEV_ID		0
+#define SOCKET0			0
+#define RING_SIZE		256
+#define ETH_NAME_LEN		32
+#define NUM_ETH_PAIR		1
+#define NUM_ETH_DEV		(2 * NUM_ETH_PAIR)
+#define NB_MBUF			512
+#define PAIR_PORT_INDEX(p)	((p) + NUM_ETH_PAIR)
+#define PORT(p)			default_params.port[(p)]
+#define TEST_ETHDEV_ID		PORT(0)
+#define TEST_ETHDEV_PAIR_ID	PORT(PAIR_PORT_INDEX(0))
+
+#define EDEV_RETRY		0xffff
+
+struct event_eth_tx_adapter_test_params {
+	struct rte_mempool *mp;
+	uint16_t rx_rings, tx_rings;
+	uint32_t caps;
+	struct rte_ring *r[NUM_ETH_DEV][MAX_NUM_QUEUE];
+	int port[NUM_ETH_DEV];
+};
+
+static int event_dev_delete;
+static struct event_eth_tx_adapter_test_params default_params;
+
+static inline int
+port_init_common(uint8_t port, const struct rte_eth_conf *port_conf,
+		struct rte_mempool *mp)
+{
+	const uint16_t rx_ring_size = RING_SIZE, tx_ring_size = RING_SIZE;
+	int retval;
+	uint16_t q;
+
+	if (!rte_eth_dev_is_valid_port(port))
+		return -1;
+
+	default_params.rx_rings = MAX_NUM_QUEUE;
+	default_params.tx_rings = MAX_NUM_QUEUE;
+
+	/* Configure the Ethernet device. */
+	retval = rte_eth_dev_configure(port, default_params.rx_rings,
+				default_params.tx_rings, port_conf);
+	if (retval != 0)
+		return retval;
+
+	for (q = 0; q < default_params.rx_rings; q++) {
+		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+				rte_eth_dev_socket_id(port), NULL, mp);
+		if (retval < 0)
+			return retval;
+	}
+
+	for (q = 0; q < default_params.tx_rings; q++) {
+		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+				rte_eth_dev_socket_id(port), NULL);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Start the Ethernet port. */
+	retval = rte_eth_dev_start(port);
+	if (retval < 0)
+		return retval;
+
+	/* Display the port MAC address. */
+	struct ether_addr addr;
+	rte_eth_macaddr_get(port, &addr);
+	printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+			   " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+			(unsigned int)port,
+			addr.addr_bytes[0], addr.addr_bytes[1],
+			addr.addr_bytes[2], addr.addr_bytes[3],
+			addr.addr_bytes[4], addr.addr_bytes[5]);
+
+	/* Enable RX in promiscuous mode for the Ethernet device. */
+	rte_eth_promiscuous_enable(port);
+
+	return 0;
+}
+
+static inline int
+port_init(uint8_t port, struct rte_mempool *mp)
+{
+	struct rte_eth_conf conf = { 0 };
+	return port_init_common(port, &conf, mp);
+}
+
+#define RING_NAME_LEN	20
+#define DEV_NAME_LEN	20
+
+static int
+init_ports(void)
+{
+	char ring_name[ETH_NAME_LEN];
+	unsigned int i, j;
+	struct rte_ring * const *c1;
+	struct rte_ring * const *c2;
+	int err;
+
+	if (!default_params.mp)
+		default_params.mp = rte_pktmbuf_pool_create("mbuf_pool",
+			NB_MBUF, 32,
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+
+	if (!default_params.mp)
+		return -ENOMEM;
+
+	for (i = 0; i < NUM_ETH_DEV; i++) {
+		for (j = 0; j < MAX_NUM_QUEUE; j++) {
+			snprintf(ring_name, sizeof(ring_name), "R%u%u", i, j);
+			default_params.r[i][j] = rte_ring_create(ring_name,
+						RING_SIZE,
+						SOCKET0,
+						RING_F_SP_ENQ | RING_F_SC_DEQ);
+			TEST_ASSERT((default_params.r[i][j] != NULL),
+				"Failed to allocate ring");
+		}
+	}
+
+	/*
+	 * To create two pseudo-Ethernet ports where the traffic is
+	 * switched between them, that is, traffic sent to port 1 is
+	 * read back from port 2 and vice-versa
+	 */
+	for (i = 0; i < NUM_ETH_PAIR; i++) {
+		char dev_name[DEV_NAME_LEN];
+		int p;
+
+		c1 = default_params.r[i];
+		c2 = default_params.r[PAIR_PORT_INDEX(i)];
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u", i, i + NUM_ETH_PAIR);
+		p = rte_eth_from_rings(dev_name, c1, MAX_NUM_QUEUE,
+				 c2, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[i] = p;
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u",  i + NUM_ETH_PAIR, i);
+		p = rte_eth_from_rings(dev_name, c2, MAX_NUM_QUEUE,
+				c1, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p > 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[PAIR_PORT_INDEX(i)] = p;
+	}
+
+	return 0;
+}
+
+static void
+deinit_ports(void)
+{
+	uint16_t i, j;
+	char name[ETH_NAME_LEN];
+
+	RTE_ETH_FOREACH_DEV(i)
+		rte_eth_dev_stop(i);
+
+	for (i = 0; i < RTE_DIM(default_params.port); i++) {
+		rte_eth_dev_get_name_by_port(default_params.port[i], name);
+		rte_vdev_uninit(name);
+		for (j = 0; j < RTE_DIM(default_params.r[i]); j++)
+			rte_ring_free(default_params.r[i][j]);
+	}
+}
+
+static int
+testsuite_setup(void)
+{
+	int err;
+	uint8_t count;
+	struct rte_event_dev_info dev_info;
+	uint8_t priority;
+	uint8_t queue_id;
+
+	count = rte_event_dev_count();
+	if (!count) {
+		printf("Failed to find a valid event device,"
+			" testing with event_sw0 device\n");
+		rte_vdev_init("event_sw0", NULL);
+		event_dev_delete = 1;
+	}
+
+	struct rte_event_dev_config config = {
+			.nb_event_queues = 1,
+			.nb_event_ports = 1,
+	};
+
+	struct rte_event_queue_conf wkr_q_conf = {
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = 1024,
+			.nb_atomic_order_sequences = 1024,
+	};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	config.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	config.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	config.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	config.nb_events_limit =
+			dev_info.max_num_events;
+
+	rte_log_set_level(0, RTE_LOG_DEBUG);
+	err = rte_event_dev_configure(TEST_DEV_ID, &config);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+	if (rte_event_queue_setup(TEST_DEV_ID, 0, &wkr_q_conf) < 0) {
+		printf("%d: error creating qid %d\n", __LINE__, 0);
+		return -1;
+	}
+	if (rte_event_port_setup(TEST_DEV_ID, 0, NULL) < 0) {
+		printf("Error setting up port %d\n", 0);
+		return -1;
+	}
+
+	priority = RTE_EVENT_DEV_PRIORITY_LOWEST;
+	queue_id = 0;
+	if (rte_event_port_link(TEST_DEV_ID, 0, &queue_id, &priority, 1) != 1) {
+		printf("Error linking port\n");
+		return -1;
+	}
+
+	err = init_ports();
+	TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID,
+						&default_params.caps);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
+			err);
+
+	return err;
+}
+
+#define DEVICE_ID_SIZE 64
+
+static void
+testsuite_teardown(void)
+{
+	deinit_ports();
+	rte_mempool_free(default_params.mp);
+	default_params.mp = NULL;
+	if (event_dev_delete)
+		rte_vdev_uninit("event_sw0");
+}
+
+static int
+tx_adapter_create(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf = {0};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	return err;
+}
+
+static void
+tx_adapter_free(void)
+{
+	rte_event_eth_tx_adapter_free(TEST_INST_ID);
+}
+
+static int
+tx_adapter_create_free(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf = {0};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID,
+					TEST_DEV_ID, &tx_p_conf);
+	TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d", -EEXIST, err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	err = rte_event_eth_tx_adapter_free(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_queue_add_del(void)
+{
+	int err;
+	uint32_t cap;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID,
+					 &cap);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						rte_eth_dev_count_total(),
+						-1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_start_stop(void)
+{
+	int err;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static uint32_t eid, tid;
+
+static int
+tx_adapter_single(uint16_t port, uint16_t tx_queue_id,
+		struct rte_mbuf *m, uint8_t qid,
+		uint8_t sched_type)
+{
+	struct rte_event event;
+	struct rte_mbuf *r;
+	int ret;
+	unsigned int l;
+
+	event.queue_id = qid;
+	event.op = RTE_EVENT_OP_NEW;
+	event.sched_type = sched_type;
+	event.mbuf = m;
+
+	m->port = port;
+	rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+	l = 0;
+	while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1) {
+		l++;
+		if (l > EDEV_RETRY)
+			break;
+	}
+
+	TEST_ASSERT(l < EDEV_RETRY, "Unable to enqueue to eventdev");
+	l = 0;
+	while (l++ < EDEV_RETRY) {
+
+		ret = rte_service_run_iter_on_app_lcore(eid, 0);
+		TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+
+		ret = rte_service_run_iter_on_app_lcore(tid, 0);
+		TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+
+		if (rte_eth_rx_burst(TEST_ETHDEV_PAIR_ID, tx_queue_id, &r, 1)) {
+			TEST_ASSERT_EQUAL(r, m, "mbuf comparison failed"
+					" expected %p received %p", m, r);
+			return 0;
+		}
+	}
+
+	TEST_ASSERT(0, "Failed to receive packet");
+	return -1;
+}
+
+static int
+tx_adapter_service(void)
+{
+	struct rte_event_eth_tx_adapter_stats stats;
+	uint32_t i;
+	int err;
+	uint8_t ev_port, ev_qid;
+	struct rte_mbuf  bufs[RING_SIZE];
+	struct rte_mbuf *pbufs[RING_SIZE];
+	struct rte_event_dev_info dev_info;
+	struct rte_event_dev_config dev_conf = {0};
+	struct rte_event_queue_conf qconf;
+	uint32_t qcnt, pcnt;
+	uint16_t q;
+	int internal_port;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID,
+						&default_params.caps);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
+			err);
+
+	internal_port = !!(default_params.caps &
+			RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+
+	if (internal_port)
+		return TEST_SUCCESS;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_event_port_get(TEST_INST_ID,
+						&ev_port);
+	TEST_ASSERT_SUCCESS(err, "Failed to get event port %d", err);
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+					&pcnt);
+	TEST_ASSERT_SUCCESS(err, "Port count get failed");
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID,
+				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &qcnt);
+	TEST_ASSERT_SUCCESS(err, "Queue count get failed");
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT_SUCCESS(err, "Dev info failed");
+
+	dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	dev_conf.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	dev_conf.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	dev_conf.nb_events_limit =
+			dev_info.max_num_events;
+	dev_conf.nb_event_queues = qcnt + 1;
+	dev_conf.nb_event_ports = pcnt;
+	err = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	ev_qid = qcnt;
+	qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
+	qconf.nb_atomic_order_sequences = 32;
+	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
+	qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
+	TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);
+
+	err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
+	TEST_ASSERT(err == 1, "Failed to link queue port %u",
+		    ev_port);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_dev_service_id_get(0, &eid);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_service_id_get(TEST_INST_ID, &tid);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_runstate_set(tid, 1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_set_runstate_mapped_check(tid, 0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_runstate_set(eid, 1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_set_runstate_mapped_check(eid, 0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	for (q = 0; q < MAX_NUM_QUEUE; q++) {
+		for (i = 0; i < RING_SIZE; i++) {
+			pbufs[i] = &bufs[i];
+			err = tx_adapter_single(TEST_ETHDEV_ID, q, pbufs[i],
+						ev_qid,
+						RTE_SCHED_TYPE_ORDERED);
+			TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+		}
+		for (i = 0; i < RING_SIZE; i++) {
+			TEST_ASSERT_EQUAL(pbufs[i], &bufs[i],
+				"Error: received data does not match"
+				" that transmitted");
+		}
+	}
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, MAX_NUM_QUEUE * RING_SIZE,
+			"stats.tx_packets expected %u got %" PRIu64,
+			MAX_NUM_QUEUE * RING_SIZE,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_reset(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, 0,
+			"stats.tx_packets expected %u got %" PRIu64,
+			0,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_get(1, &stats);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	rte_event_dev_stop(TEST_DEV_ID);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite event_eth_tx_tests = {
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.suite_name = "tx event eth adapter test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, tx_adapter_create_free),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_queue_add_del),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_start_stop),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_service),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_event_eth_tx_adapter_common(void)
+{
+	return unit_test_suite_runner(&event_eth_tx_tests);
+}
+
+REGISTER_TEST_COMMAND(event_eth_tx_adapter_autotest,
+		test_event_eth_tx_adapter_common);
diff --git a/MAINTAINERS b/MAINTAINERS
index ab23503..4a4e7b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -392,6 +392,7 @@ Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
 M: Nikhil Rao <nikhil.rao@intel.com>
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_tx_adapter*
+F: test/test/test_event_eth_tx_adapter.c
 
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/lib/librte_eventdev/meson.build b/lib/librte_eventdev/meson.build
index d885743..0f54491 100644
--- a/lib/librte_eventdev/meson.build
+++ b/lib/librte_eventdev/meson.build
@@ -17,6 +17,6 @@ headers = files('rte_eventdev.h',
 		'rte_event_eth_rx_adapter.h',
 		'rte_event_timer_adapter.h',
 		'rte_event_timer_adapter_pmd.h',
-		'rte_event_crypto_adapter.h'
+		'rte_event_crypto_adapter.h',
 		'rte_event_eth_tx_adapter.h')
 deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
diff --git a/test/test/Makefile b/test/test/Makefile
index eccc8ef..bec56b5 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -188,6 +188,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
 SRCS-y += test_eventdev.c
 SRCS-y += test_event_ring.c
 SRCS-y += test_event_eth_rx_adapter.c
+SRCS-y += test_event_eth_tx_adapter.c
 SRCS-y += test_event_timer_adapter.c
 SRCS-y += test_event_crypto_adapter.c
 endif
diff --git a/test/test/meson.build b/test/test/meson.build
index a907fd2..a75e327 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -33,6 +33,7 @@ test_sources = files('commands.c',
 	'test_efd_perf.c',
 	'test_errno.c',
 	'test_event_ring.c',
+	'test_event_eth_tx_adapter.c',
 	'test_eventdev.c',
 	'test_func_reentrancy.c',
 	'test_flow_classify.c',
@@ -150,6 +151,7 @@ test_names = [
 	'efd_perf_autotest',
 	'errno_autotest',
 	'event_ring_autotest',
+	'event_eth_tx_adapter_autotest',
 	'eventdev_common_autotest',
 	'eventdev_octeontx_autotest',
 	'eventdev_sw_autotest',
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
@ 2018-07-10 10:56   ` Pavan Nikhilesh
  2018-07-16  5:55     ` Rao, Nikhil
  0 siblings, 1 reply; 9+ messages in thread
From: Pavan Nikhilesh @ 2018-07-10 10:56 UTC (permalink / raw)
  To: Nikhil Rao, jerin.jacob, olivier.matz; +Cc: dev

Hi Nikhil,

On Fri, Jul 06, 2018 at 12:12:07PM +0530, Nikhil Rao wrote:
> The caps API allows the application to query if the transmit
> stage is implemented in the eventdev PMD or uses the common
> rte_service function. The PMD callbacks support the
> eventdev PMD implementation of the adapter.
>
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---
>  lib/librte_eventdev/rte_eventdev.h     |  30 ++++-
>  lib/librte_eventdev/rte_eventdev_pmd.h | 193 +++++++++++++++++++++++++++++++++
>  lib/librte_eventdev/rte_eventdev.c     |  19 ++++
>  3 files changed, 241 insertions(+), 1 deletion(-)
>
<...>
>
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index 801810e..a29fae1 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -175,6 +175,25 @@
>                 (dev, cdev, caps) : -ENOTSUP;
>  }
>
> +int __rte_experimental
> +rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
> +{

The caps get API needs to be similar to the Rx adapter caps get, i.e. it needs
to take the eth_port_id as a parameter so that the underlying eventdev driver
can expose the INTERNAL PORT capability per ethdev, as not all ethdev drivers
are capable of interacting with the eventdev's internal port.

rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
			uint32_t *caps);
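
For illustration, a caller could then check the capability per ethdev before
choosing the transmit path. This is only a sketch of the two-argument API
proposed above; txa_port_has_internal_port() is a hypothetical helper and is
not part of this patch:

/* Requires rte_event_eth_tx_adapter.h for the caps API and the cap flag */
static int
txa_port_has_internal_port(uint8_t dev_id, uint16_t eth_port_id)
{
	uint32_t caps = 0;

	/* Proposed signature: event dev id + ethdev port id */
	if (rte_event_eth_tx_adapter_caps_get(dev_id, eth_port_id, &caps) != 0)
		return 0;

	return !!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
}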


> +       struct rte_eventdev *dev;
> +
> +       RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +
> +       dev = &rte_eventdevs[dev_id];
> +
> +       if (caps == NULL)
> +               return -EINVAL;
> +       *caps = 0;
> +
> +       return dev->dev_ops->eth_tx_adapter_caps_get ?
> +                               (*dev->dev_ops->eth_tx_adapter_caps_get)(dev,
> +                                                                       caps)
> +                               : 0;
> +}
> +
>  static inline int
>  rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
>  {
> --
> 1.8.3.1
>

Thanks,
Pavan.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs
  2018-07-06  6:42 [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
                   ` (2 preceding siblings ...)
  2018-07-06  6:42 ` [dpdk-dev] [PATCH 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
@ 2018-07-10 12:17 ` Jerin Jacob
  2018-07-16  8:34   ` Rao, Nikhil
  3 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob @ 2018-07-10 12:17 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, dev, anoob.joseph

-----Original Message-----
> Date: Fri, 6 Jul 2018 12:12:06 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
> CC: nikhil.rao@intel.com, dev@dpdk.org
> Subject: [PATCH 1/4] eventdev: add eth Tx adapter APIs
> X-Mailer: git-send-email 1.8.3.1
> 
> 
> The ethernet Tx adapter abstracts the transmit stage of an
> event driven packet processing application. The transmit
> stage may be implemented with eventdev PMD support or use a
> rte_service function implemented in the adapter. These APIs
> provide a common configuration and control interface and
> an transmit API for the eventdev PMD implementation.
> 
> The transmit port is specified using mbuf::port. The transmit
> queue is specified using the rte_event_eth_tx_adapter_txq_set()
> function. The mbuf will specify a queue ID in the future
> (http://mails.dpdk.org/archives/dev/2018-February/090651.html)
> at which point this function will be replaced with a macro.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---

1) Update doc/api/doxy-api-index.md
2) Update lib/librte_eventdev/Makefile
+SYMLINK-y-include += rte_event_eth_tx_adapter.h


I think the following work is _pending_:

1) Update app/test-eventdev/ for Tx adapter
2) Update examples/eventdev_pipeline/ for Tx adapter
3) Add Tx adapter documentation
4) Add Tx adapter ops for octeontx driver
5) Add Tx adapter ops for dpaa driver(if need)

Nikhil,
If you are OK with it, Cavium would like to take up activities (1), (2) and (4).

Let me know your thoughts.

Since this patch set has already crossed the RC1 deadline, we will complete
all the _pending_ work and push it to the next-eventdev tree at the very
beginning of v18.11 so that Anoob's adapter helper function work can be added
in v18.11.


> 
> This patch series adds the event ethernet Tx adapter which is
> based on a previous RFC
>  * RFCv1 - http://mails.dpdk.org/archives/dev/2018-May/102936.html
>  * RFCv2 - http://mails.dpdk.org/archives/dev/2018-June/104075.html
> 
> RFC -> V1:
> =========
> 
> * Move port and tx queue id to mbuf from mbuf private area. (Jerin Jacob)
> 
> * Support for PMD transmit function. (Jerin Jacob)
> 
> * mbuf change has been replaced with rte_event_eth_tx_adapter_txq_set().
> The goal is to align with the mbuf change for a qid field.
> (http://mails.dpdk.org/archives/dev/2018-February/090651.html). Once the mbuf
> change is available, the function can be replaced with a macro with no impact
> to applications.
> 
> * Various cleanups (Jerin Jacob)
> 
>  lib/librte_eventdev/rte_event_eth_tx_adapter.h | 497 +++++++++++++++++++++++++
>  lib/librte_mbuf/rte_mbuf.h                     |   4 +-
>  MAINTAINERS                                    |   5 +
>  3 files changed, 505 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.h
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * A structure used to retrieve statistics for an eth tx adapter instance.
> + */
> +struct rte_event_eth_tx_adapter_stats {
> +       uint64_t tx_retry;
> +       /**< Number of transmit retries */
> +       uint64_t tx_packets;
> +       /**< Number of packets transmitted */
> +       uint64_t tx_dropped;
> +       /**< Number of packets dropped */
> +};
> +
> +/** Event Eth Tx Adapter Structure */
> +struct rte_event_eth_tx_adapter {
> +       uint8_t id;
> +       /**< Adapter Identifier */
> +       uint8_t eventdev_id;
> +       /**< Max mbufs processed in any service function invocation */
> +       uint32_t max_nb_tx;
> +       /**< The adapter can return early if it has processed at least
> +        * max_nb_tx mbufs. This isn't treated as a requirement; batching may
> +        * cause the adapter to process more than max_nb_tx mbufs.
> +        */
> +       uint32_t nb_queues;
> +       /**< Number of Tx queues in adapter */
> +       int socket_id;
> +       /**< socket id */
> +       rte_spinlock_t tx_lock;
> +       /**<  Synchronization with data path */
> +       void *dev_private;
> +       /**< PMD private data */
> +       char mem_name[RTE_EVENT_ETH_TX_ADAPTER_SERVICE_NAME_LEN];
> +       /**< Memory allocation name */
> +       rte_event_eth_tx_adapter_conf_cb conf_cb;
> +       /** Configuration callback */
> +       void *conf_arg;
> +       /**< Configuration callback argument */
> +       uint16_t dev_count;
> +       /**< Highest port id supported + 1 */
> +       struct rte_event_eth_tx_adapter_ethdev *txa_ethdev;
> +       /**< Per ethernet device structure */
> +       struct rte_event_eth_tx_adapter_stats stats;
> +} __rte_cache_aligned;

Can you move this structure to the .c file as part of the implementation?
Reasons are -
a) It should not be subject to ABI deprecation
b) An INTERNAL_PORT based adapter may have different values, i.e. the above
structure is implementation defined.

> +
> +struct rte_event_eth_tx_adapters {
> +       struct rte_event_eth_tx_adapter **data;
> +};
> +

same as above

> +/* Per eth device structure */
> +struct rte_event_eth_tx_adapter_ethdev {
> +       /* Pointer to ethernet device */
> +       struct rte_eth_dev *dev;
> +       /* Number of queues added */
> +       uint16_t nb_queues;
> +       /* PMD specific queue data */
> +       void *queues;
> +};

same as above

> +
> +extern struct rte_event_eth_tx_adapters rte_event_eth_tx_adapters;
> +

same as above

> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Create a new event ethernet Tx adapter with the specified identifier.
> + *
> + * @param id
> + *  The identifier of the event ethernet Tx adapter.
> + * @param dev_id
> + *  The event device identifier.
> + * @param port_config
> + *  Event port configuration, the adapter uses this configuration to
> + *  create an event port if needed.
> + * @return
> + *   - 0: Success
> + *   - <0: Error code on failure
> + */
> +int __rte_experimental
> +rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
> +                               struct rte_event_port_conf *port_config);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Create a new event ethernet Tx adapter with the specified identifier.
> + *
> + * @param id
> + *  The identifier of the event ethernet Tx adapter.
> + * @param dev_id
> + *  The event device identifier.
> + * @param conf_cb
> + *  Callback function that initalizes members of the

s/initalizes/initializes

> + *  struct rte_event_eth_tx_adapter_conf struct passed into
> + *  it.
> + * @param conf_arg
> + *  Argument that is passed to the conf_cb function.
> + * @return
> + *   - 0: Success
> + *   - <0: Error code on failure
> + */
> +int __rte_experimental
> +rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> +                               rte_event_eth_tx_adapter_conf_cb conf_cb,
> +                               void *conf_arg);
> +
> +/**
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Add a Tx queue to the adapter.
> + * A queue value of -1 is used to indicate all
> + * queues within the device.
> + *
> + * @param id
> + *  Adapter identifier.
> + * @param eth_dev_id
> + *  Ethernet Port Identifier.
> + * @param queue
> + *  Tx queue index.
> + * @return
> + *  - 0: Success, Queues added succcessfully.

s/succcessfully/successfully


> + *  - <0: Error code on failure.
> + */
> +int __rte_experimental
> +rte_event_eth_tx_adapter_queue_add(uint8_t id,
> +                               uint16_t eth_dev_id,
> +                               int32_t queue);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + *
> + * Set Tx queue in the mbuf.
> + *
> + * @param pkt
> + *  Pointer to the mbuf.
> + * @param queue
> + *  Tx queue index.
> + */
> +void __rte_experimental
> +rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue);

1) Can you make this a static inline for better performance (as it is just
an mbuf field access)?

2) Please add a _get function. It will be useful for the application and
the Tx adapter op implementation.


> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Retrieve the adapter event port. The adapter creates an event port if
> + * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
> + * eth Tx capabilities of the event device.
> + *
> + * @param id
> + *  Adapter Identifier.
> + * @param[out] event_port_id
> + *  Event port pointer.
> + * @return
> + *   - 0: Success.
> + *   - <0: Error code on failure.
> + */
> +int __rte_experimental
> +rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
> +
> +static __rte_always_inline uint16_t __rte_experimental
> +__rte_event_eth_tx_adapter_enqueue(uint8_t id, uint8_t dev_id, uint8_t port_id,
> +                               struct rte_event ev[],
> +                               uint16_t nb_events,
> +                               const event_tx_adapter_enqueue fn)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];

Access to *dev twice(see below rte_event_eth_tx_adapter_enqueue())

> +       struct rte_event_eth_tx_adapter *txa =
> +                                       rte_event_eth_tx_adapters.data[id];

Just like the common Tx adapter implementation, we can manage the ethdev queue
to adapter mapping internally, so this dereference is not required in the
fast path.

Please simply call the following, just like other eventdev ops.
fn(dev->data->ports[port_id], ev, nb_events)
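
In other words, the fast path could reduce to something like the sketch below.
This is illustrative only: it assumes the txa_enqueue op prototype is reduced
to (port, ev, nb_events) as suggested, and the adapter id parameter is omitted
since it is no longer needed for the call itself:

static inline uint16_t __rte_experimental
rte_event_eth_tx_adapter_enqueue(uint8_t dev_id, uint8_t port_id,
				struct rte_event ev[],
				uint16_t nb_events)
{
	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];

#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
	if (dev_id >= RTE_EVENT_MAX_DEVS ||
		!rte_eventdevs[dev_id].attached) {
		rte_errno = -EINVAL;
		return 0;
	}

	if (port_id >= dev->data->nb_ports) {
		rte_errno = -EINVAL;
		return 0;
	}
#endif
	/* Single indirect call, no adapter dereference in the fast path */
	return dev->txa_enqueue(dev->data->ports[port_id], ev, nb_events);
}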


> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +       if (id >= RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE ||
> +               dev_id >= RTE_EVENT_MAX_DEVS ||
> +               !rte_eventdevs[dev_id].attached) {
> +               rte_errno = -EINVAL;
> +               return 0;
> +       }
> +
> +       if (port_id >= dev->data->nb_ports) {
> +               rte_errno = -EINVAL;
> +               return 0;
> +       }
> +#endif
> +       return fn((void *)txa, dev, dev->data->ports[port_id], ev, nb_events);
> +}
> +
> +/**
> + * Enqueue a burst of events objects or an event object supplied in *rte_event*
> + * structure on an  event device designated by its *dev_id* through the event
> + * port specified by *port_id*. This function is supported if the eventdev PMD
> + * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
> + *
> + * The *nb_events* parameter is the number of event objects to enqueue which are
> + * supplied in the *ev* array of *rte_event* structure.
> + *
> + * The rte_event_eth_tx_adapter_enqueue() function returns the number of
> + * events objects it actually enqueued. A return value equal to *nb_events*
> + * means that all event objects have been enqueued.
> + *
> + * @param id
> + *  The identifier of the tx adapter.
> + * @param dev_id
> + *  The identifier of the device.
> + * @param port_id
> + *  The identifier of the event port.
> + * @param ev
> + *  Points to an array of *nb_events* objects of type *rte_event* structure
> + *  which contain the event object enqueue operations to be processed.
> + * @param nb_events
> + *  The number of event objects to enqueue, typically number of
> + *  rte_event_port_enqueue_depth() available for this port.
> + *
> + * @return
> + *   The number of event objects actually enqueued on the event device. The
> + *   return value can be less than the value of the *nb_events* parameter when
> + *   the event devices queue is full or if invalid parameters are specified in a
> + *   *rte_event*. If the return value is less than *nb_events*, the remaining
> + *   events at the end of ev[] are not consumed and the caller has to take care
> + *   of them, and rte_errno is set accordingly. Possible errno values include:
> + *   - -EINVAL  The port ID is invalid, device ID is invalid, an event's queue
> + *              ID is invalid, or an event's sched type doesn't match the
> + *              capabilities of the destination queue.
> + *   - -ENOSPC  The event port was backpressured and unable to enqueue
> + *              one or more events. This error code is only applicable to
> + *              closed systems.
> + */
> +static inline uint16_t __rte_experimental
> +rte_event_eth_tx_adapter_enqueue(uint8_t id, uint8_t dev_id,
> +                               uint8_t port_id,
> +                               struct rte_event ev[],
> +                               uint16_t nb_events)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +       return __rte_event_eth_tx_adapter_enqueue(id, dev_id, port_id, ev,
> +                                               nb_events,
> +                                               dev->txa_enqueue);

As per the above, since the function call logic is simplified, you can add the
above function logic here.

> +}
> +
> index dabb12d..ab23503 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -388,6 +388,11 @@ F: lib/librte_eventdev/*crypto_adapter*
>  F: test/test/test_event_crypto_adapter.c
>  F: doc/guides/prog_guide/event_crypto_adapter.rst
> 
> +Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
> +M: Nikhil Rao <nikhil.rao@intel.com>
> +T: git://dpdk.org/next/dpdk-next-eventdev
> +F: lib/librte_eventdev/*eth_tx_adapter*

Add the testcase also.

Overall it looks good. No more comments on specification.

> +
>  Raw device API - EXPERIMENTAL
>  M: Shreyansh Jain <shreyansh.jain@nxp.com>
>  M: Hemant Agrawal <hemant.agrawal@nxp.com>
> --
> 1.8.3.1
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-07-10 10:56   ` Pavan Nikhilesh
@ 2018-07-16  5:55     ` Rao, Nikhil
  0 siblings, 0 replies; 9+ messages in thread
From: Rao, Nikhil @ 2018-07-16  5:55 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, olivier.matz; +Cc: dev


On 7/10/2018 4:26 PM, Pavan Nikhilesh wrote:
> +int __rte_experimental
> +rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
> +{
> The caps get API needs to be similar to rx adapter caps get i.e. it needs to
> have the eth_port_id as a parameter so that the underlying event dev driver can
> expose INTERNAL PORT capability as not all ethdev drivers have the capability
> to interact with the eventdevs internal port.
>
> rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
> 			uint32_t *caps);
Hi Pavan,

Is querying the INTERNAL PORT capability on a per-ethdev basis useful to the
application?

For example, the txa_init() function in the adapter implementation can only
decide to use the internal port if it is supported for all ethdevs,
hence I left that up to the eventdev PMD to decide - i.e., it could
iterate across txa->dev_count eth devices to make that determination.
For caps in general, I agree it makes sense to pass in the ethdev, but
the INTERNAL PORT capability didn't seem useful on a per-ethdev basis.

We could also replace caps_get with something like a
rte_event_eth_tx_adapter_internal_port_check(dev_id) and add per-ethdev
caps if needed later.
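
As a sketch of the iteration I have in mind (it assumes the two-argument
caps_get you proposed; txa_uses_internal_port() is a hypothetical helper and
port id validity checks are omitted for brevity):

static int
txa_uses_internal_port(struct rte_event_eth_tx_adapter *txa)
{
	uint32_t caps;
	uint16_t i;

	for (i = 0; i < txa->dev_count; i++) {
		caps = 0;
		/* The internal port is usable only if every ethdev supports it */
		if (rte_event_eth_tx_adapter_caps_get(txa->eventdev_id, i,
						&caps) != 0 ||
		    (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) == 0)
			return 0;
	}

	return 1;
}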

Thanks,
Nikhil

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs
  2018-07-10 12:17 ` [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
@ 2018-07-16  8:34   ` Rao, Nikhil
  0 siblings, 0 replies; 9+ messages in thread
From: Rao, Nikhil @ 2018-07-16  8:34 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: olivier.matz, dev, anoob.joseph

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, July 10, 2018 5:48 PM
> To: Rao, Nikhil <nikhil.rao@intel.com>
> Cc: olivier.matz@6wind.com; dev@dpdk.org; anoob.joseph@cavium.com
> Subject: Re: [PATCH 1/4] eventdev: add eth Tx adapter APIs
> 
> ---
> 
> 1) Update doc/api/doxy-api-index.md
OK.

> 2) Update lib/librte_eventdev/Makefile
> +SYMLINK-y-include += rte_event_eth_tx_adapter.h
> 
This is done in patch 3 of this series.

> 
> I think, the following working is _pending_
> 
> 1) Update app/test-eventdev/ for Tx adapter
> 2) Update examples/eventdev_pipeline/ for Tx adapter
> 3) Add Tx adapter documentation
> 4) Add Tx adapter ops for octeontx driver
> 5) Add Tx adapter ops for dpaa driver(if need)
> 
> Nikhil,
> If you are OK then Cavium would like to take up (1), (2) and (4) activities.
> 
> Let me know your thoughts.
> 
Fine with me.

> Since this patch set already crossed the RC1 deadline. We will complete all
> the _pending_ work and push to next-eventdev tree in the very beginning
> of
> v18.11 so that Anoob's adapter helper function work can be added v18.11.
> 
> 
> >
> > This patch series adds the event ethernet Tx adapter which is based on
> > a previous RFC
> >  * RFCv1 - http://mails.dpdk.org/archives/dev/2018-May/102936.html
> >  * RFCv2 - http://mails.dpdk.org/archives/dev/2018-June/104075.html
> >
> > RFC -> V1:
> > =========
> >
> > * Move port and tx queue id to mbuf from mbuf private area. (Jerin
> > Jacob)
> >
> > * Support for PMD transmit function. (Jerin Jacob)
> >
> > * mbuf change has been replaced with
> rte_event_eth_tx_adapter_txq_set().
> > The goal is to align with the mbuf change for a qid field.
> > (http://mails.dpdk.org/archives/dev/2018-February/090651.html). Once
> > the mbuf change is available, the function can be replaced with a
> > macro with no impact to applications.
> >
> > * Various cleanups (Jerin Jacob)
> >
> >  lib/librte_eventdev/rte_event_eth_tx_adapter.h | 497
> +++++++++++++++++++++++++
> >  lib/librte_mbuf/rte_mbuf.h                     |   4 +-
> >  MAINTAINERS                                    |   5 +
> >  3 files changed, 505 insertions(+), 1 deletion(-)  create mode 100644
> > lib/librte_eventdev/rte_event_eth_tx_adapter.h
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * A structure used to retrieve statistics for an eth tx adapter instance.
> > + */
> > +struct rte_event_eth_tx_adapter_stats {
> > +       uint64_t tx_retry;
> > +       /**< Number of transmit retries */
> > +       uint64_t tx_packets;
> > +       /**< Number of packets transmitted */
> > +       uint64_t tx_dropped;
> > +       /**< Number of packets dropped */ };
> > +
> > +/** Event Eth Tx Adapter Structure */ struct rte_event_eth_tx_adapter
> > +{
> > +       uint8_t id;
> > +       /**< Adapter Identifier */
> > +       uint8_t eventdev_id;
> > +       /**< Max mbufs processed in any service function invocation */
> > +       uint32_t max_nb_tx;
> > +       /**< The adapter can return early if it has processed at least
> > +        * max_nb_tx mbufs. This isn't treated as a requirement; batching
> may
> > +        * cause the adapter to process more than max_nb_tx mbufs.
> > +        */
> > +       uint32_t nb_queues;
> > +       /**< Number of Tx queues in adapter */
> > +       int socket_id;
> > +       /**< socket id */
> > +       rte_spinlock_t tx_lock;
> > +       /**<  Synchronization with data path */
> > +       void *dev_private;
> > +       /**< PMD private data */
> > +       char
> mem_name[RTE_EVENT_ETH_TX_ADAPTER_SERVICE_NAME_LEN];
> > +       /**< Memory allocation name */
> > +       rte_event_eth_tx_adapter_conf_cb conf_cb;
> > +       /** Configuration callback */
> > +       void *conf_arg;
> > +       /**< Configuration callback argument */
> > +       uint16_t dev_count;
> > +       /**< Highest port id supported + 1 */
> > +       struct rte_event_eth_tx_adapter_ethdev *txa_ethdev;
> > +       /**< Per ethernet device structure */
> > +       struct rte_event_eth_tx_adapter_stats stats; }
> > +__rte_cache_aligned;
> 
> Can you move this structure to .c file as implementation, Reasons are -
> a) It should not be under ABI deprecation
> b) INTERNAL_PORT based adapter may have different values.i.e the above
> structure is implementation defined.
>
> > +
> > +struct rte_event_eth_tx_adapters {
> > +       struct rte_event_eth_tx_adapter **data; };
> > +
> 
> same as above
> 
> > +/* Per eth device structure */
> > +struct rte_event_eth_tx_adapter_ethdev {
> > +       /* Pointer to ethernet device */
> > +       struct rte_eth_dev *dev;
> > +       /* Number of queues added */
> > +       uint16_t nb_queues;
> > +       /* PMD specific queue data */
> > +       void *queues;
> > +};
> 
> same as above
> 
> > +
> > +extern struct rte_event_eth_tx_adapters rte_event_eth_tx_adapters;
> > +
> 
> same as above
>
OK, if these fields are not going to be used by the other adapter implementation, I will move them to the .c file.
 
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Create a new event ethernet Tx adapter with the specified identifier.
> > + *
> > + * @param id
> > + *  The identifier of the event ethernet Tx adapter.
> > + * @param dev_id
> > + *  The event device identifier.
> > + * @param port_config
> > + *  Event port configuration, the adapter uses this configuration to
> > + *  create an event port if needed.
> > + * @return
> > + *   - 0: Success
> > + *   - <0: Error code on failure
> > + */
> > +int __rte_experimental
> > +rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
> > +                               struct rte_event_port_conf
> > +*port_config);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Create a new event ethernet Tx adapter with the specified identifier.
> > + *
> > + * @param id
> > + *  The identifier of the event ethernet Tx adapter.
> > + * @param dev_id
> > + *  The event device identifier.
> > + * @param conf_cb
> > + *  Callback function that initalizes members of the
> 
> s/initalizes/initializes
> 
> > + *  struct rte_event_eth_tx_adapter_conf struct passed into
> > + *  it.
> > + * @param conf_arg
> > + *  Argument that is passed to the conf_cb function.
> > + * @return
> > + *   - 0: Success
> > + *   - <0: Error code on failure
> > + */
> > +int __rte_experimental
> > +rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> > +                               rte_event_eth_tx_adapter_conf_cb conf_cb,
> > +                               void *conf_arg);
> > +
> > +/**
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Add a Tx queue to the adapter.
> > + * A queue value of -1 is used to indicate all
> > + * queues within the device.
> > + *
> > + * @param id
> > + *  Adapter identifier.
> > + * @param eth_dev_id
> > + *  Ethernet Port Identifier.
> > + * @param queue
> > + *  Tx queue index.
> > + * @return
> > + *  - 0: Success, Queues added succcessfully.
> 
> s/succcessfully/successfully
> 
> 
> > + *  - <0: Error code on failure.
> > + */
> > +int __rte_experimental
> > +rte_event_eth_tx_adapter_queue_add(uint8_t id,
> > +                               uint16_t eth_dev_id,
> > +                               int32_t queue);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + *
> > + * Set Tx queue in the mbuf.
> > + *
> > + * @param pkt
> > + *  Pointer to the mbuf.
> > + * @param queue
> > + *  Tx queue index.
> > + */
> > +void __rte_experimental
> > +rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t
> > +queue);
> 
> 1) Can you make this as static inline for better performance(as it is just a
> mbuf field access)?
OK.
This would also move the private definition of struct txa_mbuf_txq_id to the
adapter header file, which would then need to be deprecated once the field is
available in rte_mbuf.h.

> 
> 2) Please add _get function, It will be useful for application and Tx adapter
> op implementation.
> 
> 
OK.
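
A minimal sketch of what 1) + 2) could look like; TXA_QID_READ is assumed here
as the read counterpart of the existing TXA_QID_WRITE macro, and where the
queue id is stored in the mbuf is unchanged - only the functions become
static inline:

static __rte_always_inline void
rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
{
	/* Store the Tx queue id in the mbuf, as the adapter does today */
	TXA_QID_WRITE(pkt, queue);
}

static __rte_always_inline uint16_t
rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
{
	/* Read back the Tx queue id set by txq_set() */
	return TXA_QID_READ(pkt);
}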

> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Retrieve the adapter event port. The adapter creates an event port
> > +if
> > + * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in
> the
> > + * eth Tx capabilities of the event device.
> > + *
> > + * @param id
> > + *  Adapter Identifier.
> > + * @param[out] event_port_id
> > + *  Event port pointer.
> > + * @return
> > + *   - 0: Success.
> > + *   - <0: Error code on failure.
> > + */
> > +int __rte_experimental
> > +rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t
> > +*event_port_id);
> > +
> > +static __rte_always_inline uint16_t __rte_experimental
> > +__rte_event_eth_tx_adapter_enqueue(uint8_t id, uint8_t dev_id,
> uint8_t port_id,
> > +                               struct rte_event ev[],
> > +                               uint16_t nb_events,
> > +                               const event_tx_adapter_enqueue fn) {
> > +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> 
> Access to *dev twice(see below rte_event_eth_tx_adapter_enqueue())
> 
> > +       struct rte_event_eth_tx_adapter *txa =
> > +
> > + rte_event_eth_tx_adapters.data[id];
> 
> Just like common Tx adapter implementation, We can manage  ethdev
> queue to adapter mapping internally. So this deference is not required in
> fastpath.
> 
> Please simply call the following, just like other eventdev ops.
> fn(dev->data->ports[port_id], ev, nb_events)
> 
> 
OK.

> > +
> > +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> > +       if (id >= RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE ||
> > +               dev_id >= RTE_EVENT_MAX_DEVS ||
> > +               !rte_eventdevs[dev_id].attached) {
> > +               rte_errno = -EINVAL;
> > +               return 0;
> > +       }
> > +
> > +       if (port_id >= dev->data->nb_ports) {
> > +               rte_errno = -EINVAL;
> > +               return 0;
> > +       }
> > +#endif
> > +       return fn((void *)txa, dev, dev->data->ports[port_id], ev,
> > +nb_events); }
> > +
> > +/**
> > + * Enqueue a burst of event objects or an event object supplied in *rte_event*
> > + * structure on an event device designated by its *dev_id* through the event
> > + * port specified by *port_id*. This function is supported if the eventdev PMD
> > + * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
> > + *
> > + * The *nb_events* parameter is the number of event objects to enqueue which
> > + * are supplied in the *ev* array of *rte_event* structure.
> > + *
> > + * The rte_event_eth_tx_adapter_enqueue() function returns the number of
> > + * event objects it actually enqueued. A return value equal to *nb_events*
> > + * means that all event objects have been enqueued.
> > + *
> > + * @param id
> > + *  The identifier of the tx adapter.
> > + * @param dev_id
> > + *  The identifier of the device.
> > + * @param port_id
> > + *  The identifier of the event port.
> > + * @param ev
> > + *  Points to an array of *nb_events* objects of type *rte_event* structure
> > + *  which contain the event object enqueue operations to be processed.
> > + * @param nb_events
> > + *  The number of event objects to enqueue, typically the number of
> > + *  rte_event_port_enqueue_depth() available for this port.
> > + *
> > + * @return
> > + *   The number of event objects actually enqueued on the event device. The
> > + *   return value can be less than the value of the *nb_events* parameter when
> > + *   the event device's queue is full or if invalid parameters are specified in
> > + *   a *rte_event*. If the return value is less than *nb_events*, the remaining
> > + *   events at the end of ev[] are not consumed and the caller has to take care
> > + *   of them, and rte_errno is set accordingly. Possible errno values include:
> > + *   - -EINVAL  The port ID is invalid, device ID is invalid, an event's queue
> > + *              ID is invalid, or an event's sched type doesn't match the
> > + *              capabilities of the destination queue.
> > + *   - -ENOSPC  The event port was backpressured and unable to enqueue
> > + *              one or more events. This error code is only applicable to
> > + *              closed systems.
> > + */
> > +static inline uint16_t __rte_experimental
> > +rte_event_eth_tx_adapter_enqueue(uint8_t id, uint8_t dev_id,
> > +                               uint8_t port_id,
> > +                               struct rte_event ev[],
> > +                               uint16_t nb_events)
> > +{
> > +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> > +
> > +       return __rte_event_eth_tx_adapter_enqueue(id, dev_id, port_id, ev,
> > +                                               nb_events,
> > +                                               dev->txa_enqueue);
> 
> As per the above, since the function call logic is simplified you can add
> the above function's logic here.
> 
OK, I will also delete the id parameter.
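
For reference, the revised fast path could then look roughly like the sketch
below. This is illustrative only and assumes the v2 txa_enqueue op takes
(port, ev, nb_events) as suggested above; the final code will be in the next
revision.

static inline uint16_t __rte_experimental
rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
				uint8_t port_id,
				struct rte_event ev[],
				uint16_t nb_events)
{
	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];

#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
	if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
		rte_errno = -EINVAL;
		return 0;
	}

	if (port_id >= dev->data->nb_ports) {
		rte_errno = -EINVAL;
		return 0;
	}
#endif
	/* Single dereference of *dev and no adapter lookup in the fast path */
	return dev->txa_enqueue(dev->data->ports[port_id], ev, nb_events);
}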

> > +}
> > +
> > index dabb12d..ab23503 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -388,6 +388,11 @@ F: lib/librte_eventdev/*crypto_adapter*
> >  F: test/test/test_event_crypto_adapter.c
> >  F: doc/guides/prog_guide/event_crypto_adapter.rst
> >
> > +Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
> > +M: Nikhil Rao <nikhil.rao@intel.com>
> > +T: git://dpdk.org/next/dpdk-next-eventdev
> > +F: lib/librte_eventdev/*eth_tx_adapter*
> 
> Add the testcase also.
> 
I have made that update in patch 4 of this series.

> Overall it looks good. No more comments on specification.
> 

Thanks for the review,
Nikhil

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-07-16 14:17 ` Rao, Nikhil
@ 2018-07-16 14:27   ` Pavan Nikhilesh
  0 siblings, 0 replies; 9+ messages in thread
From: Pavan Nikhilesh @ 2018-07-16 14:27 UTC (permalink / raw)
  To: Rao, Nikhil, jkollanukkaran, olivier.matz; +Cc: dev

On Mon, Jul 16, 2018 at 02:17:40PM +0000, Rao, Nikhil wrote:
>
> > -----Original Message-----
> > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > Sent: Monday, July 16, 2018 5:03 PM
> > To: Rao, Nikhil <nikhil.rao@intel.com>;
> > jkollanukkaran@caviumnetworks.com; olivier.matz@6wind.com
> > Cc: dev@dpdk.org
> > Subject: [pbhagavatula@caviumnetworks.com: Re: [dpdk-dev] [PATCH 2/4]
> > eventdev: add caps API and PMD callbacks for eth Tx adapter]
> >
> > Hi Nikhil,
> >
> > On Mon, Jul 16, 2018 at 11:25:45AM +0530, Rao, Nikhil wrote:
> > > On 7/10/2018 4:26 PM, Pavan Nikhilesh wrote:
> > > > +int __rte_experimental
> > > > +rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint32_t *caps) {
> > > > The caps get API needs to be similar to the Rx adapter caps get, i.e. it
> > > > needs to have the eth_port_id as a parameter so that the underlying
> > > > eventdev driver can expose the INTERNAL PORT capability, as not all
> > > > ethdev drivers have the capability to interact with the eventdev's
> > > > internal port.
> > > >
> > > > rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
> > > >                                   uint32_t *caps);
> > > Hi Pavan,
> > >
> > > Is querying the INTERNAL PORT on a per ethdev basis useful to the
> > > application ?
> >
> > If an application chooses to use two ports, one with the INTERNAL PORT
> > capability and one _without_ it, then it would be useful to have per-ethdev
> > classification similar to the Rx adapter. The application can classify
> > events based on the event types RTE_EVENT_TYPE_ETHDEV &
> > RTE_EVENT_TYPE_ETH_RX_ADAPTER to segregate between the INTERNAL &
> > NON-INTERNAL ports at the ingress side, and enqueue them to either
> > rte_event_eth_tx_adapter_enqueue() or to the SINGLE link queue
> > respectively.
> >
> The current Tx adapter is very similar to how the eventdev pipeline app
> decides between using generic_tx vs. worker_tx; I guess what you are
> suggesting is using the two concurrently. I am fine with this.
> Would you always assume INTERNAL PORT on the Rx side to deduce INTERNAL
> PORT on the Tx side? Just curious whether that was only an example; in the
> general case, you could have packets ingressing from an INTERNAL port and
> egressing on a port that is !INTERNAL?

I was just giving an example :). The application would be free to model the
pipeline accordingly based on event type and caps.
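
As an illustration only, the Tx stage of a worker could be modeled along the
lines below. This sketch assumes the per-ethdev caps_get() proposed above,
that the application caches the caps per ethdev in txa_caps[] at setup time,
that tx_queue_id is an application chosen SINGLE link queue serviced by the
common adapter, and it uses the enqueue signature without the id parameter as
agreed earlier in the thread.

/* txa_caps[eth_port] filled at setup with
 * rte_event_eth_tx_adapter_caps_get(dev_id, eth_port, &txa_caps[eth_port])
 */
static uint32_t txa_caps[RTE_MAX_ETHPORTS];

static inline void
worker_tx_stage(uint8_t dev_id, uint8_t ev_port, struct rte_event *ev,
		uint8_t tx_queue_id)
{
	uint16_t eth_port = ev->mbuf->port;

	if (txa_caps[eth_port] & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) {
		/* ethdev can be driven from the eventdev's internal port */
		rte_event_eth_tx_adapter_enqueue(dev_id, ev_port, ev, 1);
	} else {
		/* forward to the SINGLE link queue of the service based adapter */
		ev->queue_id = tx_queue_id;
		ev->op = RTE_EVENT_OP_FORWARD;
		rte_event_enqueue_burst(dev_id, ev_port, ev, 1);
	}
}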

>
> > Also, I don't think eventdev should iterate over all probed ethdevs and give
> > the overall caps, as an application might want only a specific ethdev to be
> > connected to the event Tx adapter.
> >
> Agreed. The current adapter only supports either the generic_tx or the
> worker_tx model and selects one at the earliest point it is feasible to do so.
>
> I will update the patch series for caps_get().
>
> Thanks,
> Nikhil

Thanks,
Pavan

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2018-07-16 14:27 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-06  6:42 [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
2018-07-06  6:42 ` [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
2018-07-10 10:56   ` Pavan Nikhilesh
2018-07-16  5:55     ` Rao, Nikhil
2018-07-06  6:42 ` [dpdk-dev] [PATCH 3/4] eventdev: add eth Tx adapter implementation Nikhil Rao
2018-07-06  6:42 ` [dpdk-dev] [PATCH 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
2018-07-10 12:17 ` [dpdk-dev] [PATCH 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
2018-07-16  8:34   ` Rao, Nikhil
2018-07-16 11:33 [dpdk-dev] [pbhagavatula@caviumnetworks.com: Re: [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter] Pavan Nikhilesh
2018-07-16 14:17 ` Rao, Nikhil
2018-07-16 14:27   ` [dpdk-dev] [PATCH 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Pavan Nikhilesh
