DPDK patches and discussions
* [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs
@ 2018-08-17  4:20 Nikhil Rao
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
                   ` (4 more replies)
  0 siblings, 5 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-08-17  4:20 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

The ethernet Tx adapter abstracts the transmit stage of an
event driven packet processing application. The transmit
stage may be implemented with eventdev PMD support or use a
rte_service function implemented in the adapter. These APIs
provide a common configuration and control interface and
a transmit API for the eventdev PMD implementation.

The transmit port is specified using mbuf::port. The transmit
queue is specified using the rte_event_eth_tx_adapter_txq_set()
function.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---

RFC -> V1:
=========

* Move port and tx queue id to mbuf from mbuf private area. (Jerin Jacob)

* Support for PMD transmit function. (Jerin Jacob)

* mbuf change has been replaced with rte_event_eth_tx_adapter_txq_set(). 
The goal is to align with the mbuf change for a qid field. 
(http://mails.dpdk.org/archives/dev/2018-February/090651.html). Once the mbuf
change is available, the function can be replaced with a macro with no impact
to applications.

* Various cleanups (Jerin Jacob)

V2:
==

* Add ethdev port to rte_event_eth_tx_adapter_caps_get() (Pavan Nikhilesh)

* Remove struct rte_event_eth_tx_adapter from rte_event_eth_tx_adapter.h
							(Jerin Jacob)

* Move rte_event_eth_tx_adapter_txq_set() to static inline,
  also add rte_event_eth_tx_adapter_txq_get().

* Remove adapter id from rte_event_eth_tx_adapter_enqueue(), also update
  eventdev PMD callback (Jerin Jacob)

* Fix doxy-api-index.md (Jerin Jacob)

* Rename eventdev_eth_tx_adapter_init_t to eventdev_eth_tx_adapter_create_t,
  Invoke it from rte_event_eth_tx_adapter_create()/create_ext()

* Remove the eventdev_eth_tx_adapter_event_port_get PMD callback, the event
  port is used by the application only in the case of the service function
  implementation.

* Add support for dynamic device (created after the adapter) in the service
  function implementation.

 MAINTAINERS                                    |   5 +
 doc/api/doxy-api-index.md                      |   1 +
 lib/librte_eventdev/rte_event_eth_tx_adapter.h | 479 +++++++++++++++++++++++++
 lib/librte_mbuf/rte_mbuf.h                     |   4 +-
 4 files changed, 488 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.h

Note: 

This patch generates a checkpatch error with the current version
of check-symbol-change.sh. The fix has been posted @
http://mails.dpdk.org/archives/dev/2018-August/109781.html


diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
new file mode 100644
index 0000000..837ecbb
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -0,0 +1,479 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+
+#ifndef _RTE_EVENT_ETH_TX_ADAPTER_
+#define _RTE_EVENT_ETH_TX_ADAPTER_
+
+/**
+ * @file
+ *
+ * RTE Event Ethernet Tx Adapter
+ *
+ * The event ethernet Tx adapter provides configuration and data path APIs
+ * for the ethernet transmit stage of an event driven packet processing
+ * application. These APIs abstract the implementation of the transmit stage
+ * and allow the application to use eventdev PMD support or a common
+ * implementation.
+ *
+ * In the common implementation, the application enqueues mbufs to the adapter
+ * which runs as a rte_service function. The service function dequeues events
+ * from its event port and transmits the mbufs referenced by these events.
+ *
+ * The ethernet Tx event adapter APIs are:
+ *
+ *  - rte_event_eth_tx_adapter_create()
+ *  - rte_event_eth_tx_adapter_create_ext()
+ *  - rte_event_eth_tx_adapter_free()
+ *  - rte_event_eth_tx_adapter_start()
+ *  - rte_event_eth_tx_adapter_stop()
+ *  - rte_event_eth_tx_adapter_queue_add()
+ *  - rte_event_eth_tx_adapter_queue_del()
+ *  - rte_event_eth_tx_adapter_stats_get()
+ *  - rte_event_eth_tx_adapter_stats_reset()
+ *  - rte_event_eth_tx_adapter_enqueue()
+ *  - rte_event_eth_tx_adapter_event_port_get()
+ *  - rte_event_eth_tx_adapter_service_id_get()
+ *
+ * The application creates the adapter using
+ * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
+ *
+ * The adapter will use the common implementation when the eventdev PMD
+ * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
+ * The common implementation uses an event port that is created using the port
+ * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
+ * application can get the port identifier using
+ * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
+ * this port.
+ *
+ * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * flag set, Tx adapter events should be enqueued using the
+ * rte_event_eth_tx_adapter_enqueue() function, else the application should
+ * use rte_event_enqueue_burst().
+ *
+ * Transmit queues can be added and deleted from the adapter using
+ * rte_event_eth_tx_adapter_queue_add()/del() APIs respectively.
+ *
+ * The application can start and stop the adapter using the
+ * rte_event_eth_tx_adapter_start/stop() calls.
+ *
+ * The common adapter implementation uses an EAL service function as described
+ * before and its execution is controlled using the rte_service APIs. The
+ * rte_event_eth_tx_adapter_service_id_get()
+ * function can be used to retrieve the adapter's service function ID.
+ *
+ * The ethernet port and transmit queue index to transmit the mbuf on are
+ * specified using the mbuf port and the struct rte_event_tx_adapter_mbuf_queue
+ * (overlaid on mbuf::hash). The application should use the
+ * rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
+ * functions to access the transmit queue index, since it is expected that the
+ * transmit queue will eventually be defined within struct rte_mbuf; using
+ * these functions minimizes the application impact of a change in how
+ * the transmit queue index is specified.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+
+#include "rte_eventdev.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * A structure used to specify the Tx queue to the ethernet Tx adapter.
+ */
+struct rte_event_tx_adapter_mbuf_queue {
+	uint32_t resvd1;
+	/**< Reserved */
+	uint16_t resvd2;
+	/**< Reserved */
+	uint16_t txq_id;
+	/**< Transmit queue index */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Adapter configuration structure
+ *
+ * @see rte_event_eth_tx_adapter_create_ext
+ * @see rte_event_eth_tx_adapter_conf_cb
+ */
+struct rte_event_eth_tx_adapter_conf {
+	uint8_t event_port_id;
+	/**< Event port identifier, the adapter service function dequeues mbuf
+	 * events from this port.
+	 * @see RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+	 */
+	uint32_t max_nb_tx;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
+	 * cause the adapter to process more than max_nb_tx mbufs.
+	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Function type used for adapter configuration callback. The callback is
+ * used to fill in members of the struct rte_event_eth_tx_adapter_conf; this
+ * callback is invoked when creating an RTE service function based
+ * adapter implementation.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param dev_id
+ *  Event device identifier.
+ * @param [out] conf
+ *  Structure that needs to be populated by this callback.
+ * @param arg
+ *  Argument to the callback. This is the same as the conf_arg passed to
+ *  rte_event_eth_tx_adapter_create_ext().
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+typedef int (*rte_event_eth_tx_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
+				struct rte_event_eth_tx_adapter_conf *conf,
+				void *arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * A structure used to retrieve statistics for an ethernet Tx adapter instance.
+ */
+struct rte_event_eth_tx_adapter_stats {
+	uint64_t tx_retry;
+	/**< Number of transmit retries */
+	uint64_t tx_packets;
+	/**< Number of packets transmitted */
+	uint64_t tx_dropped;
+	/**< Number of packets dropped */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param port_config
+ *  Event port configuration, the adapter uses this configuration to
+ *  create an event port if needed.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_config);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param conf_cb
+ *  Callback function that initializes members of the
+ *  struct rte_event_eth_tx_adapter_conf struct passed into
+ *  it.
+ * @param conf_arg
+ *  Argument that is passed to the conf_cb function.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free an ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure; if the adapter still has Tx queues
+ *      added to it, the function returns -EBUSY.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Start ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Stop ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues added successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device that have been added to this
+ * adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set the Tx queue in the mbuf. This queue is used by the adapter
+ * to transmit the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @param queue
+ *  Tx queue index.
+ */
+static __rte_always_inline void __rte_experimental
+rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
+{
+	struct rte_event_tx_adapter_mbuf_queue *mbuf_queue =
+		(struct rte_event_tx_adapter_mbuf_queue *)(&pkt->hash);
+	mbuf_queue->txq_id = queue;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve Tx queue from the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @return
+ *  Tx queue identifier.
+ *
+ * @see rte_event_eth_tx_adapter_txq_set()
+ */
+static __rte_always_inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
+{
+	struct rte_event_tx_adapter_mbuf_queue *mbuf_queue =
+		(struct rte_event_tx_adapter_mbuf_queue *)(&pkt->hash);
+	return mbuf_queue->txq_id;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the adapter event port. The adapter creates an event port if
+ * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * ethernet Tx capabilities of the event device.
+ *
+ * @param id
+ *  Adapter Identifier.
+ * @param[out] event_port_id
+ *  Event port pointer.
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
+/**
+ * Enqueue a burst of event objects supplied in *rte_event* structures
+ * on an event device designated by its *dev_id* through the event
+ * port specified by *port_id*. This function is supported if the eventdev PMD
+ * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue, which
+ * are supplied in the *ev* array of *rte_event* structures.
+ *
+ * The rte_event_eth_tx_adapter_enqueue() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param dev_id
+ *  The identifier of the device.
+ * @param port_id
+ *  The identifier of the event port.
+ * @param ev
+ *  Points to an array of *nb_events* objects of type *rte_event* structure
+ *  which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *  The number of event objects to enqueue, typically number of
+ *  rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed and the caller has to take care
+ *   of them, and rte_errno is set accordingly. Possible errno values include:
+ *   - -EINVAL  The port ID is invalid, device ID is invalid, an event's queue
+ *              ID is invalid, or an event's sched type doesn't match the
+ *              capabilities of the destination queue.
+ *   - -ENOSPC  The event port was backpressured and unable to enqueue
+ *              one or more events. This error code is only applicable to
+ *              closed systems.
+ */
+static inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
+				uint8_t port_id,
+				struct rte_event ev[],
+				uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+		!rte_eventdevs[dev_id].attached) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	return dev->txa_enqueue(dev->data->ports[port_id], ev, nb_events);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter.
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the service ID of an adapter. If the adapter doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] service_id
+ *  A pointer to a uint32_t, to be filled in with the service id.
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure, if the adapter doesn't use a rte_service
+ * function, this function returns -ESRCH.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif	/* _RTE_EVENT_ETH_TX_ADAPTER_ */
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 00793ed..7d1ed18 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -464,7 +464,9 @@ struct rte_mbuf {
 	};
 	uint16_t nb_segs;         /**< Number of segments. */
 
-	/** Input port (16 bits to support more than 256 virtual ports). */
+	/** Input port (16 bits to support more than 256 virtual ports).
+	 * The event eth Tx adapter uses this field to specify the output port.
+	 */
 	uint16_t port;
 
 	uint64_t ol_flags;        /**< Offload features. */
diff --git a/MAINTAINERS b/MAINTAINERS
index 227e32c..13f378a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -391,6 +391,11 @@ F: lib/librte_eventdev/*crypto_adapter*
 F: test/test/test_event_crypto_adapter.c
 F: doc/guides/prog_guide/event_crypto_adapter.rst
 
+Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
+M: Nikhil Rao <nikhil.rao@intel.com>
+T: git://dpdk.org/next/dpdk-next-eventdev
+F: lib/librte_eventdev/*eth_tx_adapter*
+
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 9265907..911a902 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -24,6 +24,7 @@ The public API headers are grouped by topics:
   [event_eth_rx_adapter]   (@ref rte_event_eth_rx_adapter.h),
   [event_timer_adapter]    (@ref rte_event_timer_adapter.h),
   [event_crypto_adapter]   (@ref rte_event_crypto_adapter.h),
+  [event_eth_tx_adapter]   (@ref rte_event_eth_tx_adapter.h),
   [rawdev]             (@ref rte_rawdev.h),
   [metrics]            (@ref rte_metrics.h),
   [bitrate]            (@ref rte_bitrate.h),
-- 
1.8.3.1


* [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-08-17  4:20 [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
@ 2018-08-17  4:20 ` Nikhil Rao
  2018-08-19 10:45   ` Jerin Jacob
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 3/4] eventdev: add eth Tx adapter implementation Nikhil Rao
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 27+ messages in thread
From: Nikhil Rao @ 2018-08-17  4:20 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

The caps API allows the application to query if the transmit
stage is implemented in the eventdev PMD or uses the common
rte_service function. The PMD callbacks support the
eventdev PMD implementation of the adapter.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_eventdev.h     |  33 +++++-
 lib/librte_eventdev/rte_eventdev_pmd.h | 200 +++++++++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c     |  37 ++++++
 3 files changed, 269 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b6fd6ee..4717292 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1186,6 +1186,32 @@ struct rte_event {
 rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 				  uint32_t *caps);
 
+/* Ethdev Tx adapter capability bitmap flags */
+#define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT	0x1
+/**< This flag is set when the PMD supports a packet transmit callback
+ */
+
+/**
+ * Retrieve the event device's eth Tx adapter capabilities
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param eth_port_id
+ *   The identifier of the ethernet device.
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
+				uint32_t *caps);
+
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
@@ -1204,6 +1230,10 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 		uint16_t nb_events, uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+typedef uint16_t (*event_tx_adapter_enqueue)(void *port,
+				struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
 #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
 /**< @internal Max length of name of event PMD */
 
@@ -1266,7 +1296,8 @@ struct rte_eventdev {
 	/**< Pointer to PMD dequeue function. */
 	event_dequeue_burst_t dequeue_burst;
 	/**< Pointer to PMD dequeue burst function. */
-
+	event_tx_adapter_enqueue txa_enqueue;
+	/**< Pointer to PMD eth Tx adapter enqueue function. */
 	struct rte_eventdev_data *data;
 	/**< Pointer to device data */
 	struct rte_eventdev_ops *dev_ops;
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 3fbb4d2..cd7145a 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -789,6 +789,186 @@ typedef int (*eventdev_crypto_adapter_stats_reset)
 			(const struct rte_eventdev *dev,
 			 const struct rte_cryptodev *cdev);
 
+/**
+ * Retrieve the event device's eth Tx adapter capabilities.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_tx_adapter_caps_get_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					uint32_t *caps);
+
+/**
+ * Create adapter callback.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_create_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Free adapter callback.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_free_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param tx_queue_id
+ *   Transmit queue index
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_add_t)(
+					uint8_t id,
+					const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device that have been added to this
+ * adapter.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param tx_queue_id
+ *   Transmit queue index
+ *
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_del_t)(
+					uint8_t id,
+					const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Start the adapter.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_start_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Stop the adapter.
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stop_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+struct rte_event_eth_tx_adapter_stats;
+
+/**
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter
+ *
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_get_t)(
+				uint8_t id,
+				const struct rte_eventdev *dev,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * Reset statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_reset_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
 /** Event device operations function pointer table */
 struct rte_eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -862,6 +1042,26 @@ struct rte_eventdev_ops {
 	eventdev_crypto_adapter_stats_reset crypto_adapter_stats_reset;
 	/**< Reset crypto stats */
 
+	eventdev_eth_tx_adapter_caps_get_t eth_tx_adapter_caps_get;
+	/**< Get ethernet Tx adapter capabilities */
+
+	eventdev_eth_tx_adapter_create_t eth_tx_adapter_create;
+	/**< Create adapter callback */
+	eventdev_eth_tx_adapter_free_t eth_tx_adapter_free;
+	/**< Free adapter callback */
+	eventdev_eth_tx_adapter_queue_add_t eth_tx_adapter_queue_add;
+	/**< Add Tx queues to the eth Tx adapter */
+	eventdev_eth_tx_adapter_queue_del_t eth_tx_adapter_queue_del;
+	/**< Delete Tx queues from the eth Tx adapter */
+	eventdev_eth_tx_adapter_start_t eth_tx_adapter_start;
+	/**< Start eth Tx adapter */
+	eventdev_eth_tx_adapter_stop_t eth_tx_adapter_stop;
+	/**< Stop eth Tx adapter */
+	eventdev_eth_tx_adapter_stats_get_t eth_tx_adapter_stats_get;
+	/**< Get eth Tx adapter statistics */
+	eventdev_eth_tx_adapter_stats_reset_t eth_tx_adapter_stats_reset;
+	/**< Reset eth Tx adapter statistics */
+
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
 
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 801810e..52097a8 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -175,6 +175,31 @@
 		(dev, cdev, caps) : -ENOTSUP;
 }
 
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
+				uint32_t *caps)
+{
+	struct rte_eventdev *dev;
+	struct rte_eth_dev *eth_dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_port_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+	eth_dev = &rte_eth_devices[eth_port_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+
+	*caps = 0;
+
+	return dev->dev_ops->eth_tx_adapter_caps_get ?
+			(*dev->dev_ops->eth_tx_adapter_caps_get)(dev,
+								eth_dev,
+								caps)
+			: 0;
+}
+
 static inline int
 rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
@@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
 	return RTE_EVENT_MAX_DEVS;
 }
 
+static uint16_t
+rte_event_tx_adapter_enqueue(__rte_unused void *port,
+			__rte_unused struct rte_event ev[],
+			__rte_unused uint16_t nb_events)
+{
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
 struct rte_eventdev *
 rte_event_pmd_allocate(const char *name, int socket_id)
 {
@@ -1295,6 +1329,9 @@ struct rte_eventdev *
 
 	eventdev = &rte_eventdevs[dev_id];
 
+	if (eventdev->txa_enqueue == NULL)
+		eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
+
 	if (eventdev->data == NULL) {
 		struct rte_eventdev_data *eventdev_data = NULL;
 
-- 
1.8.3.1


* [dpdk-dev] [PATCH v2 3/4] eventdev: add eth Tx adapter implementation
  2018-08-17  4:20 [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
@ 2018-08-17  4:20 ` Nikhil Rao
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-08-17  4:20 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

This patch implements the Tx adapter APIs by invoking the
corresponding eventdev PMD callbacks and also provides
the common rte_service function based implementation when
the eventdev PMD support is absent.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 config/rte_config.h                            |    1 +
 lib/librte_eventdev/rte_event_eth_tx_adapter.c | 1120 ++++++++++++++++++++++++
 config/common_base                             |    2 +-
 lib/librte_eventdev/Makefile                   |    2 +
 lib/librte_eventdev/meson.build                |    6 +-
 lib/librte_eventdev/rte_eventdev_version.map   |   12 +
 6 files changed, 1140 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.c

diff --git a/config/rte_config.h b/config/rte_config.h
index 28f04b4..eb096cc 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -66,6 +66,7 @@
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 10
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
new file mode 100644
index 0000000..05253d4
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -0,0 +1,1120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+#include <rte_spinlock.h>
+#include <rte_service_component.h>
+#include <rte_ethdev.h>
+
+#include "rte_eventdev_pmd.h"
+#include "rte_event_eth_tx_adapter.h"
+
+#define TXA_BATCH_SIZE		32
+#define TXA_SERVICE_NAME_LEN	32
+#define TXA_MEM_NAME_LEN	32
+#define TXA_FLUSH_THRESHOLD	1024
+#define TXA_RETRY_CNT		100
+#define TXA_MAX_NB_TX		128
+#define TXA_INVALID_DEV_ID	INT32_C(-1)
+#define TXA_INVALID_SERVICE_ID	INT64_C(-1)
+
+#define txa_evdev(id) (&rte_eventdevs[txa_dev_id_array[(id)]])
+
+#define txa_dev_caps_get(id) txa_evdev((id))->dev_ops->eth_tx_adapter_caps_get
+
+#define txa_dev_adapter_create(t) txa_evdev(t)->dev_ops->eth_tx_adapter_create
+
+#define txa_dev_adapter_create_ext(t) \
+				txa_evdev(t)->dev_ops->eth_tx_adapter_create
+
+#define txa_dev_adapter_free(t) txa_evdev(t)->dev_ops->eth_tx_adapter_free
+
+#define txa_dev_queue_add(id) txa_evdev(id)->dev_ops->eth_tx_adapter_queue_add
+
+#define txa_dev_queue_del(t) txa_evdev(t)->dev_ops->eth_tx_adapter_queue_del
+
+#define txa_dev_start(t) txa_evdev(t)->dev_ops->eth_tx_adapter_start
+
+#define txa_dev_stop(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stop
+
+#define txa_dev_stats_reset(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_reset
+
+#define txa_dev_stats_get(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_get
+
+#define RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+do { \
+	if (!txa_valid_id(id)) { \
+		RTE_EDEV_LOG_ERR("Invalid eth Tx adapter id = %d", id); \
+		return retval; \
+	} \
+} while (0)
+
+#define TXA_CHECK_OR_ERR_RET(id) \
+do {\
+	int ret; \
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET((id), -EINVAL); \
+	ret = txa_init(); \
+	if (ret != 0) \
+		return ret; \
+	if (!txa_adapter_exist((id))) \
+		return -EINVAL; \
+} while (0)
+
+/* Tx retry callback structure */
+struct txa_retry {
+	/* Ethernet port id */
+	uint16_t port_id;
+	/* Tx queue */
+	uint16_t tx_queue;
+	/* Adapter ID */
+	uint8_t id;
+};
+
+/* Per queue structure */
+struct txa_service_queue_info {
+	/* Queue has been added */
+	uint8_t added;
+	/* Retry callback argument */
+	struct txa_retry txa_retry;
+	/* Tx buffer */
+	struct rte_eth_dev_tx_buffer *tx_buf;
+};
+
+/* PMD private structure */
+struct txa_service_data {
+	/* Max mbufs processed in any service function invocation */
+	uint32_t max_nb_tx;
+	/* Number of Tx queues in adapter */
+	uint32_t nb_queues;
+	/* Synchronization with data path */
+	rte_spinlock_t tx_lock;
+	/* Event port ID */
+	uint8_t port_id;
+	/* Event device identifier */
+	uint8_t eventdev_id;
+	/* Highest port id supported + 1 */
+	uint16_t dev_count;
+	/* Loop count to flush Tx buffers */
+	int loop_cnt;
+	/* Per ethernet device structure */
+	struct txa_service_ethdev *txa_ethdev;
+	/* Statistics */
+	struct rte_event_eth_tx_adapter_stats stats;
+	/* Adapter Identifier */
+	uint8_t id;
+	/* Conf arg must be freed */
+	uint8_t conf_free;
+	/* Configuration callback */
+	rte_event_eth_tx_adapter_conf_cb conf_cb;
+	/* Configuration callback argument */
+	void *conf_arg;
+	/* socket id */
+	int socket_id;
+	/* Per adapter EAL service */
+	int64_t service_id;
+	/* Memory allocation name */
+	char mem_name[TXA_MEM_NAME_LEN];
+} __rte_cache_aligned;
+
+/* Per eth device structure */
+struct txa_service_ethdev {
+	/* Pointer to ethernet device */
+	struct rte_eth_dev *dev;
+	/* Number of queues added */
+	uint16_t nb_queues;
+	/* PMD specific queue data */
+	void *queues;
+};
+
+/* Array of adapter instances, initialized with event device id
+ * when adapter is created
+ */
+static int *txa_dev_id_array;
+
+/* Array of pointers to service implementation data */
+static struct txa_service_data **txa_service_data_array;
+
+static int32_t txa_service_func(void *args);
+static int txa_service_adapter_create_ext(uint8_t id,
+			struct rte_eventdev *dev,
+			rte_event_eth_tx_adapter_conf_cb conf_cb,
+			void *conf_arg);
+static int txa_service_queue_del(uint8_t id,
+				const struct rte_eth_dev *dev,
+				int32_t tx_queue_id);
+
+static int
+txa_adapter_exist(uint8_t id)
+{
+	return txa_dev_id_array[id] != TXA_INVALID_DEV_ID;
+}
+
+static inline int
+txa_valid_id(uint8_t id)
+{
+	return id < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE;
+}
+
+static void *
+txa_memzone_array_get(const char *name, unsigned int elt_size, int nb_elems)
+{
+	const struct rte_memzone *mz;
+	unsigned int sz;
+
+	sz = elt_size * nb_elems;
+	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+
+	mz = rte_memzone_lookup(name);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve_aligned(name, sz, rte_socket_id(), 0,
+						 RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			RTE_EDEV_LOG_ERR("failed to reserve memzone"
+					" name = %s err = %"
+					PRId32, name, rte_errno);
+			return NULL;
+		}
+	}
+
+	return mz->addr;
+}
+
+static int
+txa_dev_id_array_init(void)
+{
+	if (txa_dev_id_array == NULL) {
+		int i;
+
+		txa_dev_id_array = txa_memzone_array_get("txa_adapter_array",
+					sizeof(int),
+					RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE);
+		if (txa_dev_id_array == NULL)
+			return -ENOMEM;
+
+		for (i = 0; i < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE; i++)
+			txa_dev_id_array[i] = TXA_INVALID_DEV_ID;
+	}
+
+	return 0;
+}
+
+static int
+txa_init(void)
+{
+	return txa_dev_id_array_init();
+}
+
+static int
+txa_service_data_init(void)
+{
+	if (txa_service_data_array == NULL) {
+		txa_service_data_array =
+				txa_memzone_array_get("txa_service_data_array",
+					sizeof(*txa_service_data_array),
+					RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE);
+		if (txa_service_data_array == NULL)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static inline struct txa_service_data *
+txa_service_id_to_data(uint8_t id)
+{
+	return txa_service_data_array[id];
+}
+
+static inline struct txa_service_queue_info *
+txa_service_queue(struct txa_service_data *txa, uint16_t port_id,
+		uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	if (unlikely(txa->txa_ethdev == NULL || txa->dev_count < port_id + 1))
+		return NULL;
+
+	tqi = txa->txa_ethdev[port_id].queues;
+
+	return likely(tqi != NULL) ? tqi + tx_queue_id : NULL;
+}
+
+static int
+txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
+		struct rte_event_eth_tx_adapter_conf *conf, void *arg)
+{
+	int ret;
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf *pc;
+	struct rte_event_dev_config dev_conf;
+	int started;
+	uint8_t port_id;
+
+	pc = arg;
+	dev = &rte_eventdevs[dev_id];
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(dev_id);
+
+	port_id = dev_conf.nb_event_ports;
+	dev_conf.nb_event_ports += 1;
+
+	ret = rte_event_dev_configure(dev_id, &dev_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to configure event dev %u",
+						dev_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	pc->disable_implicit_release = 0;
+	ret = rte_event_port_setup(dev_id, port_id, pc);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to setup event port %u",
+					port_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	conf->event_port_id = port_id;
+	conf->max_nb_tx = TXA_MAX_NB_TX;
+	if (started)
+		ret = rte_event_dev_start(dev_id);
+	return ret;
+}
+
+static int
+txa_service_ethdev_alloc(struct txa_service_data *txa)
+{
+	struct txa_service_ethdev *txa_ethdev;
+	uint16_t i, dev_count;
+
+	dev_count = rte_eth_dev_count_avail();
+	if (txa->txa_ethdev && dev_count == txa->dev_count)
+		return 0;
+
+	txa_ethdev = rte_zmalloc_socket(txa->mem_name,
+					dev_count * sizeof(*txa_ethdev),
+					0,
+					txa->socket_id);
+	if (txa_ethdev == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to alloc txa::txa_ethdev");
+		return -ENOMEM;
+	}
+
+	if (txa->dev_count)
+		memcpy(txa_ethdev, txa->txa_ethdev,
+			txa->dev_count * sizeof(*txa_ethdev));
+
+	RTE_ETH_FOREACH_DEV(i) {
+		if (i == dev_count)
+			break;
+		txa_ethdev[i].dev = &rte_eth_devices[i];
+	}
+
+	txa->txa_ethdev = txa_ethdev;
+	txa->dev_count = dev_count;
+	return 0;
+}
+
+static int
+txa_service_queue_array_alloc(struct txa_service_data *txa,
+			uint16_t port_id)
+{
+	struct txa_service_queue_info *tqi;
+	uint16_t nb_queue;
+	int ret;
+
+	ret = txa_service_ethdev_alloc(txa);
+	if (ret != 0)
+		return ret;
+
+	if (txa->txa_ethdev[port_id].queues)
+		return 0;
+
+	nb_queue = txa->txa_ethdev[port_id].dev->data->nb_tx_queues;
+	tqi = rte_zmalloc_socket(txa->mem_name,
+				nb_queue *
+				sizeof(struct txa_service_queue_info), 0,
+				txa->socket_id);
+	if (tqi == NULL)
+		return -ENOMEM;
+	txa->txa_ethdev[port_id].queues = tqi;
+	return 0;
+}
+
+static void
+txa_service_queue_array_free(struct txa_service_data *txa,
+			uint16_t port_id)
+{
+	struct txa_service_ethdev *txa_ethdev;
+	struct txa_service_queue_info *tqi;
+
+	if (txa->txa_ethdev == NULL)
+		return;
+
+	txa_ethdev = &txa->txa_ethdev[port_id];
+	if (txa_ethdev->nb_queues != 0)
+		return;
+
+	tqi = txa_ethdev->queues;
+	txa_ethdev->queues = NULL;
+	rte_free(tqi);
+
+	if (txa->nb_queues == 0) {
+		rte_free(txa->txa_ethdev);
+		txa->txa_ethdev = NULL;
+	}
+}
+
+static void
+txa_service_unregister(struct txa_service_data *txa)
+{
+	if (txa->service_id != TXA_INVALID_SERVICE_ID) {
+		rte_service_component_runstate_set(txa->service_id, 0);
+		while (rte_service_may_be_active(txa->service_id))
+			rte_pause();
+		rte_service_component_unregister(txa->service_id);
+	}
+	txa->service_id = TXA_INVALID_SERVICE_ID;
+}
+
+static int
+txa_service_register(struct txa_service_data *txa)
+{
+	int ret;
+	struct rte_service_spec service;
+	struct rte_event_eth_tx_adapter_conf conf;
+
+	if (txa->service_id != TXA_INVALID_SERVICE_ID)
+		return 0;
+
+	memset(&service, 0, sizeof(service));
+	snprintf(service.name, TXA_SERVICE_NAME_LEN, "txa_%d", txa->id);
+	service.socket_id = txa->socket_id;
+	service.callback = txa_service_func;
+	service.callback_userdata = txa;
+	service.capabilities = RTE_SERVICE_CAP_MT_SAFE;
+	ret = rte_service_component_register(&service,
+					(uint32_t *)&txa->service_id);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to register service %s err = %"
+				 PRId32, service.name, ret);
+		return ret;
+	}
+
+	ret = txa->conf_cb(txa->id, txa->eventdev_id, &conf, txa->conf_arg);
+	if (ret) {
+		txa_service_unregister(txa);
+		return ret;
+	}
+
+	rte_service_component_runstate_set(txa->service_id, 1);
+	txa->port_id = conf.event_port_id;
+	txa->max_nb_tx = conf.max_nb_tx;
+	return 0;
+}
+
+static struct rte_eth_dev_tx_buffer *
+txa_service_tx_buf_alloc(struct txa_service_data *txa,
+			const struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	port_id = dev->data->port_id;
+	tb = rte_zmalloc_socket(txa->mem_name,
+				RTE_ETH_TX_BUFFER_SIZE(TXA_BATCH_SIZE),
+				0,
+				rte_eth_dev_socket_id(port_id));
+	if (tb == NULL)
+		RTE_EDEV_LOG_ERR("Failed to allocate memory for tx buffer");
+	return tb;
+}
+
+static int
+txa_service_is_queue_added(struct txa_service_data *txa,
+			const struct rte_eth_dev *dev,
+			uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	tqi = txa_service_queue(txa, dev->data->port_id, tx_queue_id);
+	return tqi && tqi->added;
+}
+
+static int
+txa_service_ctrl(uint8_t id, int start)
+{
+	int ret;
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return 0;
+
+	ret = rte_service_runstate_set(txa->service_id, start);
+	if (ret == 0 && !start) {
+		while (rte_service_may_be_active(txa->service_id))
+			rte_pause();
+	}
+	return ret;
+}
+
+static void
+txa_service_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+			void *userdata)
+{
+	struct txa_retry *tr;
+	struct txa_service_data *data;
+	struct rte_event_eth_tx_adapter_stats *stats;
+	uint16_t sent = 0;
+	unsigned int retry = 0;
+	uint16_t i, n;
+
+	tr = (struct txa_retry *)(uintptr_t)userdata;
+	data = txa_service_id_to_data(tr->id);
+	stats = &data->stats;
+
+	do {
+		n = rte_eth_tx_burst(tr->port_id, tr->tx_queue,
+			       &pkts[sent], unsent - sent);
+
+		sent += n;
+	} while (sent != unsent && retry++ < TXA_RETRY_CNT);
+
+	for (i = sent; i < unsent; i++)
+		rte_pktmbuf_free(pkts[i]);
+
+	stats->tx_retry += retry;
+	stats->tx_packets += sent;
+	stats->tx_dropped += unsent - sent;
+}
+
+static void
+txa_service_tx(struct txa_service_data *txa, struct rte_event *ev,
+	uint32_t n)
+{
+	uint32_t i;
+	uint16_t nb_tx;
+	struct rte_event_eth_tx_adapter_stats *stats;
+
+	stats = &txa->stats;
+
+	nb_tx = 0;
+	for (i = 0; i < n; i++) {
+		struct rte_mbuf *m;
+		uint16_t port;
+		uint16_t queue;
+		struct txa_service_queue_info *tqi;
+
+		m = ev[i].mbuf;
+		port = m->port;
+		queue = rte_event_eth_tx_adapter_txq_get(m);
+
+		tqi = txa_service_queue(txa, port, queue);
+		if (unlikely(tqi == NULL || !tqi->added)) {
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		nb_tx += rte_eth_tx_buffer(port, queue, tqi->tx_buf, m);
+	}
+
+	stats->tx_packets += nb_tx;
+}
+
+static int32_t
+txa_service_func(void *args)
+{
+	struct txa_service_data *txa = args;
+	uint8_t dev_id;
+	uint8_t port;
+	uint16_t n;
+	uint32_t nb_tx, max_nb_tx;
+	struct rte_event ev[TXA_BATCH_SIZE];
+
+	dev_id = txa->eventdev_id;
+	max_nb_tx = txa->max_nb_tx;
+	port = txa->port_id;
+
+	if (txa->nb_queues == 0)
+		return 0;
+
+	if (!rte_spinlock_trylock(&txa->tx_lock))
+		return 0;
+
+	for (nb_tx = 0; nb_tx < max_nb_tx; nb_tx += n) {
+
+		n = rte_event_dequeue_burst(dev_id, port, ev, RTE_DIM(ev), 0);
+		if (!n)
+			break;
+		txa_service_tx(txa, ev, n);
+	}
+
+	if ((txa->loop_cnt++ & (TXA_FLUSH_THRESHOLD - 1)) == 0) {
+
+		struct txa_service_ethdev *tdi;
+		struct txa_service_queue_info *tqi;
+		struct rte_eth_dev *dev;
+		uint16_t i;
+
+		tdi = txa->txa_ethdev;
+		nb_tx = 0;
+
+		RTE_ETH_FOREACH_DEV(i) {
+			uint16_t q;
+
+			if (i == txa->dev_count)
+				break;
+
+			dev = tdi[i].dev;
+			if (tdi[i].nb_queues == 0)
+				continue;
+			for (q = 0; q < dev->data->nb_tx_queues; q++) {
+
+				tqi = txa_service_queue(txa, i, q);
+				if (unlikely(tqi == NULL || !tqi->added))
+					continue;
+
+				nb_tx += rte_eth_tx_buffer_flush(i, q,
+							tqi->tx_buf);
+			}
+		}
+
+		txa->stats.tx_packets += nb_tx;
+	}
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+}
+
+static int
+txa_service_adapter_create(uint8_t id, struct rte_eventdev *dev,
+			struct rte_event_port_conf *port_conf)
+{
+	struct txa_service_data *txa;
+	struct rte_event_port_conf *cb_conf;
+	int ret;
+
+	cb_conf = rte_malloc(NULL, sizeof(*cb_conf), 0);
+	if (cb_conf == NULL)
+		return -ENOMEM;
+
+	*cb_conf = *port_conf;
+	ret = txa_service_adapter_create_ext(id, dev, txa_service_conf_cb,
+					cb_conf);
+	if (ret) {
+		rte_free(cb_conf);
+		return ret;
+	}
+
+	txa = txa_service_id_to_data(id);
+	txa->conf_free = 1;
+	return ret;
+}
+
+static int
+txa_service_adapter_create_ext(uint8_t id, struct rte_eventdev *dev,
+			rte_event_eth_tx_adapter_conf_cb conf_cb,
+			void *conf_arg)
+{
+	struct txa_service_data *txa;
+	int socket_id;
+	char mem_name[TXA_SERVICE_NAME_LEN];
+	int ret;
+
+	if (conf_cb == NULL)
+		return -EINVAL;
+
+	socket_id = dev->data->socket_id;
+	snprintf(mem_name, TXA_MEM_NAME_LEN,
+		"rte_event_eth_txa_%d",
+		id);
+
+	ret = txa_service_data_init();
+	if (ret != 0)
+		return ret;
+
+	txa = rte_zmalloc_socket(mem_name,
+				sizeof(*txa),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txa == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for tx adapter");
+		return -ENOMEM;
+	}
+
+	txa->id = id;
+	txa->eventdev_id = dev->data->dev_id;
+	txa->socket_id = socket_id;
+	strncpy(txa->mem_name, mem_name, TXA_MEM_NAME_LEN);
+	txa->conf_cb = conf_cb;
+	txa->conf_arg = conf_arg;
+	txa->service_id = TXA_INVALID_SERVICE_ID;
+	rte_spinlock_init(&txa->tx_lock);
+	txa_service_data_array[id] = txa;
+
+	return 0;
+}
+
+static int
+txa_service_event_port_get(uint8_t id, uint8_t *port)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return -ENODEV;
+
+	*port = txa->port_id;
+	return 0;
+}
+
+static int
+txa_service_adapter_free(uint8_t id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->nb_queues) {
+		RTE_EDEV_LOG_ERR("%" PRIu32 " Tx queues not deleted",
+				txa->nb_queues);
+		return -EBUSY;
+	}
+
+	if (txa->conf_free)
+		rte_free(txa->conf_arg);
+	rte_free(txa);
+	return 0;
+}
+
+static int
+txa_service_queue_add(uint8_t id,
+		__rte_unused struct rte_eventdev *dev,
+		const struct rte_eth_dev *eth_dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *txa;
+	struct txa_service_ethdev *tdi;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	struct txa_retry *txa_retry;
+	int ret = 0;
+
+	txa = txa_service_id_to_data(id);
+
+	if (tx_queue_id == -1) {
+		int nb_queues;
+		uint16_t i, j;
+		uint16_t *qdone;
+
+		nb_queues = eth_dev->data->nb_tx_queues;
+		if (txa->dev_count > eth_dev->data->port_id) {
+			tdi = &txa->txa_ethdev[eth_dev->data->port_id];
+			nb_queues -= tdi->nb_queues;
+		}
+
+		qdone = rte_zmalloc(txa->mem_name,
+				nb_queues * sizeof(*qdone), 0);
+		if (qdone == NULL)
+			return -ENOMEM;
+		j = 0;
+		for (i = 0; i < nb_queues; i++) {
+			if (txa_service_is_queue_added(txa, eth_dev, i))
+				continue;
+			ret = txa_service_queue_add(id, dev, eth_dev, i);
+			if (ret == 0)
+				qdone[j++] = i;
+			else
+				break;
+		}
+
+		if (i != nb_queues) {
+			for (i = 0; i < j; i++)
+				txa_service_queue_del(id, eth_dev, qdone[i]);
+		}
+		rte_free(qdone);
+		return ret;
+	}
+
+	ret = txa_service_register(txa);
+	if (ret)
+		return ret;
+
+	rte_spinlock_lock(&txa->tx_lock);
+
+	if (txa_service_is_queue_added(txa, eth_dev, tx_queue_id)) {
+		rte_spinlock_unlock(&txa->tx_lock);
+		return 0;
+	}
+
+	ret = txa_service_queue_array_alloc(txa, eth_dev->data->port_id);
+	if (ret)
+		goto err_unlock;
+
+	tb = txa_service_tx_buf_alloc(txa, eth_dev);
+	if (tb == NULL)
+		goto err_unlock;
+
+	tdi = &txa->txa_ethdev[eth_dev->data->port_id];
+	tqi = txa_service_queue(txa, eth_dev->data->port_id, tx_queue_id);
+
+	txa_retry = &tqi->txa_retry;
+	txa_retry->id = txa->id;
+	txa_retry->port_id = eth_dev->data->port_id;
+	txa_retry->tx_queue = tx_queue_id;
+
+	rte_eth_tx_buffer_init(tb, TXA_BATCH_SIZE);
+	rte_eth_tx_buffer_set_err_callback(tb,
+		txa_service_buffer_retry, txa_retry);
+
+	tqi->tx_buf = tb;
+	tqi->added = 1;
+	tdi->nb_queues++;
+	txa->nb_queues++;
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+
+err_unlock:
+	if (txa->nb_queues == 0) {
+		txa_service_queue_array_free(txa,
+					eth_dev->data->port_id);
+		txa_service_unregister(txa);
+	}
+
+	rte_spinlock_unlock(&txa->tx_lock);
+	return -1;
+}
+
+static int
+txa_service_queue_del(uint8_t id,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *txa;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	if (tx_queue_id == -1) {
+		uint16_t i;
+		int ret = 0;
+
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			ret = txa_service_queue_del(id, dev, i);
+			if (ret != 0)
+				break;
+		}
+		return ret;
+	}
+
+	txa = txa_service_id_to_data(id);
+	port_id = dev->data->port_id;
+
+	tqi = txa_service_queue(txa, port_id, tx_queue_id);
+	if (tqi == NULL || !tqi->added)
+		return 0;
+
+	tb = tqi->tx_buf;
+	tqi->added = 0;
+	tqi->tx_buf = NULL;
+	rte_free(tb);
+	txa->nb_queues--;
+	txa->txa_ethdev[port_id].nb_queues--;
+
+	txa_service_queue_array_free(txa, port_id);
+	return 0;
+}
+
+static int
+txa_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return -EINVAL;
+
+	*service_id = txa->service_id;
+	return 0;
+}
+
+static int
+txa_service_start(uint8_t id)
+{
+	return txa_service_ctrl(id, 1);
+}
+
+static int
+txa_service_stats_get(uint8_t id,
+		struct rte_event_eth_tx_adapter_stats *stats)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	*stats = txa->stats;
+	return 0;
+}
+
+static int
+txa_service_stats_reset(uint8_t id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	memset(&txa->stats, 0, sizeof(txa->stats));
+	return 0;
+}
+
+static int
+txa_service_stop(uint8_t id)
+{
+	return txa_service_ctrl(id, 0);
+}
+
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+	int ret;
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+
+	ret = txa_init();
+	if (ret != 0)
+		return ret;
+
+	if (txa_adapter_exist(id))
+		return -EEXIST;
+
+	txa_dev_id_array[id] = dev_id;
+	if (txa_dev_adapter_create(id))
+		ret = txa_dev_adapter_create(id)(id, dev);
+
+	if (ret != 0) {
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	ret = txa_service_adapter_create(id, dev, port_conf);
+	if (ret != 0) {
+		if (txa_dev_adapter_free(id))
+			txa_dev_adapter_free(id)(id, dev);
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	txa_dev_id_array[id] = dev_id;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg)
+{
+	struct rte_eventdev *dev;
+	int ret;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	ret = txa_init();
+	if (ret != 0)
+		return ret;
+
+	if (txa_adapter_exist(id))
+		return -EEXIST;
+
+	dev = &rte_eventdevs[dev_id];
+
+	txa_dev_id_array[id] = dev_id;
+	if (txa_dev_adapter_create_ext(id))
+		ret = txa_dev_adapter_create_ext(id)(id, dev);
+
+	if (ret != 0) {
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	ret = txa_service_adapter_create_ext(id, dev, conf_cb, conf_arg);
+	if (ret != 0) {
+		if (txa_dev_adapter_free(id))
+			txa_dev_adapter_free(id)(id, dev);
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	txa_dev_id_array[id] = dev_id;
+	return 0;
+}
+
+
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+	TXA_CHECK_OR_ERR_RET(id);
+
+	return txa_service_event_port_get(id, event_port_id);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_adapter_free(id) ?
+		txa_dev_adapter_free(id)(id, txa_evdev(id)) :
+		0;
+
+	if (ret == 0)
+		ret = txa_service_adapter_free(id);
+	txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_eth_dev *eth_dev;
+	int ret;
+	uint32_t caps;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+	TXA_CHECK_OR_ERR_RET(id);
+
+	eth_dev = &rte_eth_devices[eth_dev_id];
+	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	caps = 0;
+	if (txa_dev_caps_get(id))
+		txa_dev_caps_get(id)(txa_evdev(id), eth_dev, &caps);
+
+	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+		ret =  txa_dev_queue_add(id) ?
+					txa_dev_queue_add(id)(id,
+							txa_evdev(id),
+							eth_dev,
+							queue) : 0;
+	else
+		ret = txa_service_queue_add(id, txa_evdev(id), eth_dev, queue);
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_eth_dev *eth_dev;
+	int ret;
+	uint32_t caps;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+	TXA_CHECK_OR_ERR_RET(id);
+
+	eth_dev = &rte_eth_devices[eth_dev_id];
+	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	caps = 0;
+
+	if (txa_dev_caps_get(id))
+		txa_dev_caps_get(id)(txa_evdev(id), eth_dev, &caps);
+
+	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+		ret =  txa_dev_queue_del(id) ?
+					txa_dev_queue_del(id)(id, txa_evdev(id),
+							eth_dev,
+							queue) : 0;
+	else
+		ret = txa_service_queue_del(id, eth_dev, queue);
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	TXA_CHECK_OR_ERR_RET(id);
+
+	return txa_service_id_get(id, service_id);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_start(id) ? txa_dev_start(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_start(id);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	if (stats == NULL)
+		return -EINVAL;
+
+	ret = txa_dev_stats_get(id) ?
+			txa_dev_stats_get(id)(id, txa_evdev(id), stats) : 0;
+	if (ret == 0)
+		ret = txa_service_stats_get(id, stats);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_stats_reset(id) ?
+		txa_dev_stats_reset(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_stats_reset(id);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_stop(id) ? txa_dev_stop(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_stop(id);
+	return ret;
+}
diff --git a/config/common_base b/config/common_base
index 201cdf6..2d445eb 100644
--- a/config/common_base
+++ b/config/common_base
@@ -590,7 +590,7 @@ CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
 CONFIG_RTE_EVENT_TIMER_ADAPTER_NUM_MAX=32
 CONFIG_RTE_EVENT_ETH_INTR_RING_SIZE=1024
 CONFIG_RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE=32
-
+CONFIG_RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE=32
 #
 # Compile PMD for skeleton event device
 #
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index 47f599a..424ff35 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -28,6 +28,7 @@ SRCS-y += rte_event_ring.c
 SRCS-y += rte_event_eth_rx_adapter.c
 SRCS-y += rte_event_timer_adapter.c
 SRCS-y += rte_event_crypto_adapter.c
+SRCS-y += rte_event_eth_tx_adapter.c
 
 # export include files
 SYMLINK-y-include += rte_eventdev.h
@@ -39,6 +40,7 @@ SYMLINK-y-include += rte_event_eth_rx_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter_pmd.h
 SYMLINK-y-include += rte_event_crypto_adapter.h
+SYMLINK-y-include += rte_event_eth_tx_adapter.h
 
 # versioning export map
 EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/meson.build b/lib/librte_eventdev/meson.build
index 3cbaf29..47989e7 100644
--- a/lib/librte_eventdev/meson.build
+++ b/lib/librte_eventdev/meson.build
@@ -14,7 +14,8 @@ sources = files('rte_eventdev.c',
 		'rte_event_ring.c',
 		'rte_event_eth_rx_adapter.c',
 		'rte_event_timer_adapter.c',
-		'rte_event_crypto_adapter.c')
+		'rte_event_crypto_adapter.c',
+		'rte_event_eth_tx_adapter.c')
 headers = files('rte_eventdev.h',
 		'rte_eventdev_pmd.h',
 		'rte_eventdev_pmd_pci.h',
@@ -23,5 +24,6 @@ headers = files('rte_eventdev.h',
 		'rte_event_eth_rx_adapter.h',
 		'rte_event_timer_adapter.h',
 		'rte_event_timer_adapter_pmd.h',
-		'rte_event_crypto_adapter.h')
+		'rte_event_crypto_adapter.h',
+		'rte_event_eth_tx_adapter.h')
 deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 12835e9..19c1494 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -96,6 +96,18 @@ EXPERIMENTAL {
 	rte_event_crypto_adapter_stats_reset;
 	rte_event_crypto_adapter_stop;
 	rte_event_eth_rx_adapter_cb_register;
+	rte_event_eth_tx_adapter_caps_get;
+	rte_event_eth_tx_adapter_create;
+	rte_event_eth_tx_adapter_create_ext;
+	rte_event_eth_tx_adapter_event_port_get;
+	rte_event_eth_tx_adapter_free;
+	rte_event_eth_tx_adapter_queue_add;
+	rte_event_eth_tx_adapter_queue_del;
+	rte_event_eth_tx_adapter_service_id_get;
+	rte_event_eth_tx_adapter_start;
+	rte_event_eth_tx_adapter_stats_get;
+	rte_event_eth_tx_adapter_stats_reset;
+	rte_event_eth_tx_adapter_stop;
 	rte_event_timer_adapter_caps_get;
 	rte_event_timer_adapter_create;
 	rte_event_timer_adapter_create_ext;
-- 
1.8.3.1


* [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter
  2018-08-17  4:20 [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 3/4] eventdev: add eth Tx adapter implementation Nikhil Rao
@ 2018-08-17  4:20 ` Nikhil Rao
  2018-08-17 11:55   ` Pavan Nikhilesh
  2018-08-19 10:19 ` [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
  4 siblings, 1 reply; 27+ messages in thread
From: Nikhil Rao @ 2018-08-17  4:20 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

This patch adds tests for the eth Tx adapter APIs. It also
tests the data path of the rte_service function based
implementation of the APIs.
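One piece of the data path these tests exercise is the Tx buffer error
callback, which retries unsent packets a bounded number of times before
dropping them. A standalone sketch of that accounting (plain C;
stub_tx_burst() is a stand-in for rte_eth_tx_burst() and is an assumption of
this sketch, not part of the patch):

```c
#include <stdint.h>

/* Bound on retransmit attempts, as in the adapter implementation */
#define TXA_RETRY_CNT 100

struct stats {
	uint64_t tx_packets;
	uint64_t tx_dropped;
	uint64_t tx_retry;
};

/* Stub transmit: sends at most 2 packets per call */
static uint16_t
stub_tx_burst(uint16_t nb_pkts)
{
	return nb_pkts < 2 ? nb_pkts : 2;
}

/* Retry the unsent packets, then account for what was sent vs dropped */
static void
buffer_retry(uint16_t unsent, struct stats *stats)
{
	uint16_t sent = 0;
	unsigned int retry = 0;
	uint16_t n;

	do {
		n = stub_tx_burst(unsent - sent);
		sent += n;
	} while (sent != unsent && retry++ < TXA_RETRY_CNT);

	/* Packets beyond 'sent' would be freed here */
	stats->tx_retry += retry;
	stats->tx_packets += sent;
	stats->tx_dropped += unsent - sent;
}
```

With a stub that transmits at most 2 packets per call, flushing 5 unsent
packets completes in 3 calls and records 2 retries and no drops.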

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 test/test/test_event_eth_tx_adapter.c | 676 ++++++++++++++++++++++++++++++++++
 MAINTAINERS                           |   1 +
 test/test/Makefile                    |   1 +
 test/test/meson.build                 |   2 +
 4 files changed, 680 insertions(+)
 create mode 100644 test/test/test_event_eth_tx_adapter.c

diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
new file mode 100644
index 0000000..2dc487b
--- /dev/null
+++ b/test/test/test_event_eth_tx_adapter.c
@@ -0,0 +1,676 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+#include <string.h>
+#include <rte_common.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_eth_ring.h>
+#include <rte_service.h>
+#include <rte_event_eth_tx_adapter.h>
+
+#include "test.h"
+
+#define MAX_NUM_QUEUE		RTE_PMD_RING_MAX_RX_RINGS
+#define TEST_INST_ID		0
+#define TEST_DEV_ID		0
+#define SOCKET0			0
+#define RING_SIZE		256
+#define ETH_NAME_LEN		32
+#define NUM_ETH_PAIR		1
+#define NUM_ETH_DEV		(2 * NUM_ETH_PAIR)
+#define NB_MBUF			512
+#define PAIR_PORT_INDEX(p)	((p) + NUM_ETH_PAIR)
+#define PORT(p)			default_params.port[(p)]
+#define TEST_ETHDEV_ID		PORT(0)
+#define TEST_ETHDEV_PAIR_ID	PORT(PAIR_PORT_INDEX(0))
+
+#define EDEV_RETRY		0xffff
+
+struct event_eth_tx_adapter_test_params {
+	struct rte_mempool *mp;
+	uint16_t rx_rings, tx_rings;
+	uint32_t caps;
+	struct rte_ring *r[NUM_ETH_DEV][MAX_NUM_QUEUE];
+	int port[NUM_ETH_DEV];
+};
+
+static int event_dev_delete;
+static struct event_eth_tx_adapter_test_params default_params;
+
+static inline int
+port_init_common(uint8_t port, const struct rte_eth_conf *port_conf,
+		struct rte_mempool *mp)
+{
+	const uint16_t rx_ring_size = RING_SIZE, tx_ring_size = RING_SIZE;
+	int retval;
+	uint16_t q;
+
+	if (!rte_eth_dev_is_valid_port(port))
+		return -1;
+
+	default_params.rx_rings = MAX_NUM_QUEUE;
+	default_params.tx_rings = MAX_NUM_QUEUE;
+
+	/* Configure the Ethernet device. */
+	retval = rte_eth_dev_configure(port, default_params.rx_rings,
+				default_params.tx_rings, port_conf);
+	if (retval != 0)
+		return retval;
+
+	for (q = 0; q < default_params.rx_rings; q++) {
+		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+				rte_eth_dev_socket_id(port), NULL, mp);
+		if (retval < 0)
+			return retval;
+	}
+
+	for (q = 0; q < default_params.tx_rings; q++) {
+		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+				rte_eth_dev_socket_id(port), NULL);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Start the Ethernet port. */
+	retval = rte_eth_dev_start(port);
+	if (retval < 0)
+		return retval;
+
+	/* Display the port MAC address. */
+	struct ether_addr addr;
+	rte_eth_macaddr_get(port, &addr);
+	printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+			   " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+			(unsigned int)port,
+			addr.addr_bytes[0], addr.addr_bytes[1],
+			addr.addr_bytes[2], addr.addr_bytes[3],
+			addr.addr_bytes[4], addr.addr_bytes[5]);
+
+	/* Enable RX in promiscuous mode for the Ethernet device. */
+	rte_eth_promiscuous_enable(port);
+
+	return 0;
+}
+
+static inline int
+port_init(uint8_t port, struct rte_mempool *mp)
+{
+	struct rte_eth_conf conf = { 0 };
+	return port_init_common(port, &conf, mp);
+}
+
+#define RING_NAME_LEN	20
+#define DEV_NAME_LEN	20
+
+static int
+init_ports(void)
+{
+	char ring_name[ETH_NAME_LEN];
+	unsigned int i, j;
+	struct rte_ring * const *c1;
+	struct rte_ring * const *c2;
+	int err;
+
+	if (!default_params.mp)
+		default_params.mp = rte_pktmbuf_pool_create("mbuf_pool",
+			NB_MBUF, 32,
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+
+	if (!default_params.mp)
+		return -ENOMEM;
+
+	for (i = 0; i < NUM_ETH_DEV; i++) {
+		for (j = 0; j < MAX_NUM_QUEUE; j++) {
+			snprintf(ring_name, sizeof(ring_name), "R%u%u", i, j);
+			default_params.r[i][j] = rte_ring_create(ring_name,
+						RING_SIZE,
+						SOCKET0,
+						RING_F_SP_ENQ | RING_F_SC_DEQ);
+			TEST_ASSERT((default_params.r[i][j] != NULL),
+				"Failed to allocate ring");
+		}
+	}
+
+	/*
+	 * Create two pseudo-Ethernet ports with their traffic switched
+	 * between them, i.e., traffic sent to port 1 is read back from
+	 * port 2 and vice-versa.
+	 */
+	for (i = 0; i < NUM_ETH_PAIR; i++) {
+		char dev_name[DEV_NAME_LEN];
+		int p;
+
+		c1 = default_params.r[i];
+		c2 = default_params.r[PAIR_PORT_INDEX(i)];
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u", i, i + NUM_ETH_PAIR);
+		p = rte_eth_from_rings(dev_name, c1, MAX_NUM_QUEUE,
+				 c2, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[i] = p;
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u",  i + NUM_ETH_PAIR, i);
+		p = rte_eth_from_rings(dev_name, c2, MAX_NUM_QUEUE,
+				c1, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[PAIR_PORT_INDEX(i)] = p;
+	}
+
+	return 0;
+}
+
+static void
+deinit_ports(void)
+{
+	uint16_t i, j;
+	char name[ETH_NAME_LEN];
+
+	for (i = 0; i < RTE_DIM(default_params.port); i++) {
+		rte_eth_dev_stop(default_params.port[i]);
+		rte_eth_dev_get_name_by_port(default_params.port[i], name);
+		rte_vdev_uninit(name);
+		for (j = 0; j < RTE_DIM(default_params.r[i]); j++)
+			rte_ring_free(default_params.r[i][j]);
+	}
+}
+
+static int
+testsuite_setup(void)
+{
+	int err;
+	uint8_t count;
+	struct rte_event_dev_info dev_info;
+	uint8_t priority;
+	uint8_t queue_id;
+
+	count = rte_event_dev_count();
+	if (!count) {
+		printf("Failed to find a valid event device,"
+			" testing with event_sw0 device\n");
+		rte_vdev_init("event_sw0", NULL);
+		event_dev_delete = 1;
+	}
+
+	struct rte_event_dev_config config = {
+			.nb_event_queues = 1,
+			.nb_event_ports = 1,
+	};
+
+	struct rte_event_queue_conf wkr_q_conf = {
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = 1024,
+			.nb_atomic_order_sequences = 1024,
+	};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	config.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	config.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	config.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	config.nb_events_limit =
+			dev_info.max_num_events;
+
+	rte_log_set_level(0, RTE_LOG_DEBUG);
+	err = rte_event_dev_configure(TEST_DEV_ID, &config);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+	if (rte_event_queue_setup(TEST_DEV_ID, 0, &wkr_q_conf) < 0) {
+		printf("%d: error creating qid %d\n", __LINE__, 0);
+		return -1;
+	}
+	if (rte_event_port_setup(TEST_DEV_ID, 0, NULL) < 0) {
+		printf("Error setting up port %d\n", 0);
+		return -1;
+	}
+
+	priority = RTE_EVENT_DEV_PRIORITY_LOWEST;
+	if (rte_event_port_link(TEST_DEV_ID, 0, &queue_id, &priority, 1) != 1) {
+		printf("Error linking port\n");
+		return -1;
+	}
+
+	err = init_ports();
+	TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+						&default_params.caps);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
+			err);
+
+	return err;
+}
+
+#define DEVICE_ID_SIZE 64
+
+static void
+testsuite_teardown(void)
+{
+	deinit_ports();
+	rte_mempool_free(default_params.mp);
+	default_params.mp = NULL;
+	if (event_dev_delete)
+		rte_vdev_uninit("event_sw0");
+}
+
+static int
+tx_adapter_create(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf = {0};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	return err;
+}
+
+static void
+tx_adapter_free(void)
+{
+	rte_event_eth_tx_adapter_free(TEST_INST_ID);
+}
+
+static int
+tx_adapter_create_free(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf;
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID,
+					TEST_DEV_ID, &tx_p_conf);
+	TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d", -EEXIST, err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	err = rte_event_eth_tx_adapter_free(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_queue_add_del(void)
+{
+	int err;
+	uint32_t cap;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+					 &cap);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						rte_eth_dev_count_total(),
+						-1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_start_stop(void)
+{
+	int err;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(1);
+
+	err = rte_event_eth_tx_adapter_stop(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static uint32_t eid, tid;
+
+static int
+tx_adapter_single(uint16_t port, uint16_t tx_queue_id,
+		struct rte_mbuf *m, uint8_t qid,
+		uint8_t sched_type)
+{
+	struct rte_event event;
+	struct rte_mbuf *r;
+	int ret;
+	unsigned int l;
+
+	event.queue_id = qid;
+	event.op = RTE_EVENT_OP_NEW;
+	event.sched_type = sched_type;
+	event.mbuf = m;
+
+	m->port = port;
+	rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+	l = 0;
+	while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1) {
+		l++;
+		if (l > EDEV_RETRY)
+			break;
+	}
+
+	TEST_ASSERT(l < EDEV_RETRY, "Unable to enqueue to eventdev");
+	l = 0;
+	while (l++ < EDEV_RETRY) {
+
+		ret = rte_service_run_iter_on_app_lcore(eid, 0);
+		TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+
+		ret = rte_service_run_iter_on_app_lcore(tid, 0);
+		TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+
+		if (rte_eth_rx_burst(TEST_ETHDEV_PAIR_ID, tx_queue_id, &r, 1)) {
+			TEST_ASSERT_EQUAL(r, m, "mbuf comparison failed"
+					" expected %p received %p", m, r);
+			return 0;
+		}
+	}
+
+	TEST_ASSERT(0, "Failed to receive packet");
+	return -1;
+}
+
+static int
+tx_adapter_service(void)
+{
+	struct rte_event_eth_tx_adapter_stats stats;
+	uint32_t i;
+	int err;
+	uint8_t ev_port, ev_qid;
+	struct rte_mbuf  bufs[RING_SIZE];
+	struct rte_mbuf *pbufs[RING_SIZE];
+	struct rte_event_dev_info dev_info;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_queue_conf qconf;
+	uint32_t qcnt, pcnt;
+	uint16_t q;
+	int internal_port;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+						&default_params.caps);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
+			err);
+
+	internal_port = !!(default_params.caps &
+			RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+
+	if (internal_port)
+		return TEST_SUCCESS;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_event_port_get(TEST_INST_ID,
+						&ev_port);
+	TEST_ASSERT_SUCCESS(err, "Failed to get event port %d", err);
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+					&pcnt);
+	TEST_ASSERT_SUCCESS(err, "Port count get failed");
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID,
+				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &qcnt);
+	TEST_ASSERT_SUCCESS(err, "Queue count get failed");
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT_SUCCESS(err, "Dev info failed");
+
+	dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	dev_conf.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	dev_conf.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	dev_conf.nb_events_limit =
+			dev_info.max_num_events;
+	dev_conf.nb_event_queues = qcnt + 1;
+	dev_conf.nb_event_ports = pcnt;
+	err = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	ev_qid = qcnt;
+	qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
+	qconf.nb_atomic_order_sequences = 32;
+	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
+	qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
+	TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);
+
+	err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
+	TEST_ASSERT(err == 1, "Failed to link queue port %u",
+		    ev_port);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_dev_service_id_get(0, &eid);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_service_id_get(TEST_INST_ID, &tid);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_runstate_set(tid, 1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_set_runstate_mapped_check(tid, 0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_runstate_set(eid, 1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_set_runstate_mapped_check(eid, 0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	for (q = 0; q < MAX_NUM_QUEUE; q++) {
+		for (i = 0; i < RING_SIZE; i++)
+			pbufs[i] = &bufs[i];
+		for (i = 0; i < RING_SIZE; i++) {
+			pbufs[i] = &bufs[i];
+			err = tx_adapter_single(TEST_ETHDEV_ID, q, pbufs[i],
+						ev_qid,
+						RTE_SCHED_TYPE_ORDERED);
+			TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+		}
+		for (i = 0; i < RING_SIZE; i++) {
+			TEST_ASSERT_EQUAL(pbufs[i], &bufs[i],
+				"Error: received data does not match"
+				" that transmitted");
+		}
+	}
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, MAX_NUM_QUEUE * RING_SIZE,
+			"stats.tx_packets expected %u got %lu",
+			MAX_NUM_QUEUE * RING_SIZE,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_reset(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, 0,
+			"stats.tx_packets expected %u got %lu",
+			0,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_get(1, &stats);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	rte_event_dev_stop(TEST_DEV_ID);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_dynamic_device(void)
+{
+	uint16_t port_id = rte_eth_dev_count_avail();
+	const char *null_dev[2] = { "eth_null0", "eth_null1" };
+	struct rte_eth_conf dev_conf = {0};
+	int ret;
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(null_dev); i++) {
+		ret = rte_vdev_init(null_dev[i], NULL);
+		TEST_ASSERT_SUCCESS(ret, "%s Port creation failed %d",
+				null_dev[i], ret);
+
+		if (i == 0) {
+			ret = tx_adapter_create();
+			TEST_ASSERT_SUCCESS(ret, "Adapter create failed %d",
+					ret);
+		}
+
+		ret = rte_eth_dev_configure(port_id + i, MAX_NUM_QUEUE,
+					MAX_NUM_QUEUE, &dev_conf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to configure device %d", ret);
+
+		ret = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+							port_id + i, 0);
+		TEST_ASSERT_SUCCESS(ret, "Failed to add queues %d", ret);
+
+	}
+
+	for (i = 0; i < RTE_DIM(null_dev); i++) {
+		ret = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+							port_id + i, -1);
+		TEST_ASSERT_SUCCESS(ret, "Failed to delete queues %d", ret);
+	}
+
+	tx_adapter_free();
+
+	for (i = 0; i < RTE_DIM(null_dev); i++)
+		rte_vdev_uninit(null_dev[i]);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite event_eth_tx_tests = {
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.suite_name = "tx event eth adapter test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, tx_adapter_create_free),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_queue_add_del),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_start_stop),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_service),
+		TEST_CASE_ST(NULL, NULL, tx_adapter_dynamic_device),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_event_eth_tx_adapter_common(void)
+{
+	return unit_test_suite_runner(&event_eth_tx_tests);
+}
+
+REGISTER_TEST_COMMAND(event_eth_tx_adapter_autotest,
+		test_event_eth_tx_adapter_common);
diff --git a/MAINTAINERS b/MAINTAINERS
index 13f378a..2930f6c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -395,6 +395,7 @@ Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
 M: Nikhil Rao <nikhil.rao@intel.com>
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_tx_adapter*
+F: test/test/test_event_eth_tx_adapter.c
 
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/test/test/Makefile b/test/test/Makefile
index e6967ba..dcea441 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -191,6 +191,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
 SRCS-y += test_eventdev.c
 SRCS-y += test_event_ring.c
 SRCS-y += test_event_eth_rx_adapter.c
+SRCS-y += test_event_eth_tx_adapter.c
 SRCS-y += test_event_timer_adapter.c
 SRCS-y += test_event_crypto_adapter.c
 endif
diff --git a/test/test/meson.build b/test/test/meson.build
index b1dd6ec..3d2887b 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -34,6 +34,7 @@ test_sources = files('commands.c',
 	'test_efd_perf.c',
 	'test_errno.c',
 	'test_event_ring.c',
+	'test_event_eth_tx_adapter.c',
 	'test_eventdev.c',
 	'test_func_reentrancy.c',
 	'test_flow_classify.c',
@@ -152,6 +153,7 @@ test_names = [
 	'efd_perf_autotest',
 	'errno_autotest',
 	'event_ring_autotest',
+	'event_eth_tx_adapter_autotest',
 	'eventdev_common_autotest',
 	'eventdev_octeontx_autotest',
 	'eventdev_sw_autotest',
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
@ 2018-08-17 11:55   ` Pavan Nikhilesh
  2018-08-22 16:13     ` Rao, Nikhil
  0 siblings, 1 reply; 27+ messages in thread
From: Pavan Nikhilesh @ 2018-08-17 11:55 UTC (permalink / raw)
  To: Nikhil Rao, jerin.jacob, olivier.matz; +Cc: dev

Hi Nikhil,

Few comments inline.

Nit: Please use --in-reply-to while sending next versions.
     (https://core.dpdk.org/contribute/)

Thanks,
Pavan

On Fri, Aug 17, 2018 at 09:50:52AM +0530, Nikhil Rao wrote:
>
> This patch adds tests for the eth Tx adapter APIs. It also
> tests the data path for the rte_service function based
> implementation of the APIs.
>
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---
>  test/test/test_event_eth_tx_adapter.c | 676 ++++++++++++++++++++++++++++++++++
>  MAINTAINERS                           |   1 +
>  test/test/Makefile                    |   1 +
>  test/test/meson.build                 |   2 +
>  4 files changed, 680 insertions(+)
>  create mode 100644 test/test/test_event_eth_tx_adapter.c
>
> diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
> new file mode 100644
> index 0000000..2dc487b
> --- /dev/null
> +++ b/test/test/test_event_eth_tx_adapter.c

<snip>

> +static int
> +testsuite_setup(void)
> +{
> +       int err;
> +       uint8_t count;
> +       struct rte_event_dev_info dev_info;
> +       uint8_t priority;
> +       uint8_t queue_id;
> +
> +       count = rte_event_dev_count();
> +       if (!count) {
> +               printf("Failed to find a valid event device,"
> +                       " testing with event_sw0 device\n");
> +               rte_vdev_init("event_sw0", NULL);
> +               event_dev_delete = 1;
> +       }
> +
> +       struct rte_event_dev_config config = {
> +                       .nb_event_queues = 1,
> +                       .nb_event_ports = 1,
> +       };
> +
> +       struct rte_event_queue_conf wkr_q_conf = {
> +                       .schedule_type = RTE_SCHED_TYPE_ORDERED,
> +                       .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
> +                       .nb_atomic_flows = 1024,
> +                       .nb_atomic_order_sequences = 1024,
> +       };
> +
> +       err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
> +       config.nb_event_queue_flows = dev_info.max_event_queue_flows;
> +       config.nb_event_port_dequeue_depth =
> +                       dev_info.max_event_port_dequeue_depth;
> +       config.nb_event_port_enqueue_depth =
> +                       dev_info.max_event_port_enqueue_depth;
> +       config.nb_events_limit =
> +                       dev_info.max_num_events;
> +
> +       rte_log_set_level(0, RTE_LOG_DEBUG);
> +       err = rte_event_dev_configure(TEST_DEV_ID, &config);
> +       TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
> +                       err);
> +       if (rte_event_queue_setup(TEST_DEV_ID, 0, &wkr_q_conf) < 0) {
> +               printf("%d: error creating qid %d\n", __LINE__, 0);
> +               return -1;
> +       }
> +       if (rte_event_port_setup(TEST_DEV_ID, 0, NULL) < 0) {
> +               printf("Error setting up port %d\n", 0);
> +               return -1;
> +       }
> +
> +       priority = RTE_EVENT_DEV_PRIORITY_LOWEST;

queue_id is uninitialized here; a garbage value might be passed.

> +       if (rte_event_port_link(TEST_DEV_ID, 0, &queue_id, &priority, 1) != 1) {
> +               printf("Error linking port\n");
> +               return -1;
> +       }
> +
> +       err = init_ports();
> +       TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
> +
> +       err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
> +                                               &default_params.caps);
> +       TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
> +                       err);
> +
> +       return err;
> +}
> +

<snip>

> +static uint32_t eid, tid;
> +
> +static int
> +tx_adapter_single(uint16_t port, uint16_t tx_queue_id,
> +               struct rte_mbuf *m, uint8_t qid,
> +               uint8_t sched_type)
> +{
> +       struct rte_event event;
> +       struct rte_mbuf *r;
> +       int ret;
> +       unsigned int l;
> +
> +       event.queue_id = qid;

Set event_type to RTE_EVENT_TYPE_CPU so that underlying drivers don't mess up the
packet.

> +       event.op = RTE_EVENT_OP_NEW;
> +       event.sched_type = sched_type;
> +       event.mbuf = m;
> +

<snip>

> +
> +static int
> +tx_adapter_service(void)
> +{
> +       struct rte_event_eth_tx_adapter_stats stats;
> +       uint32_t i;
> +       int err;
> +       uint8_t ev_port, ev_qid;
> +       struct rte_mbuf  bufs[RING_SIZE];
> +       struct rte_mbuf *pbufs[RING_SIZE];
> +       struct rte_event_dev_info dev_info;
> +       struct rte_event_dev_config dev_conf;
> +       struct rte_event_queue_conf qconf;
> +       uint32_t qcnt, pcnt;
> +       uint16_t q;
> +       int internal_port;
> +
> +       err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
> +                                               &default_params.caps);
> +       TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
> +                       err);
> +
> +       internal_port = !!(default_params.caps &
> +                       RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
> +
> +       if (internal_port)
> +               return TEST_SUCCESS;
> +
> +       err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
> +                                               -1);
> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> +
> +       err = rte_event_eth_tx_adapter_event_port_get(TEST_INST_ID,
> +                                               &ev_port);
> +       TEST_ASSERT_SUCCESS(err, "Failed to get event port %d", err);
> +
> +       err = rte_event_dev_attr_get(TEST_DEV_ID, RTE_EVENT_DEV_ATTR_PORT_COUNT,
> +                                       &pcnt);
> +       TEST_ASSERT_SUCCESS(err, "Port count get failed");
> +
> +       err = rte_event_dev_attr_get(TEST_DEV_ID,
> +                               RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &qcnt);
> +       TEST_ASSERT_SUCCESS(err, "Queue count get failed");
> +
> +       err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
> +       TEST_ASSERT_SUCCESS(err, "Dev info failed");
> +

memset dev_conf to zero to avoid passing indeterminate values.

> +       dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
> +       dev_conf.nb_event_port_dequeue_depth =
> +                       dev_info.max_event_port_dequeue_depth;
> +       dev_conf.nb_event_port_enqueue_depth =
> +                       dev_info.max_event_port_enqueue_depth;
> +       dev_conf.nb_events_limit =
> +                       dev_info.max_num_events;
> +       dev_conf.nb_event_queues = qcnt + 1;
> +       dev_conf.nb_event_ports = pcnt;
> +       err = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
> +       TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
> +                       err);
> +
> +       ev_qid = qcnt;
> +       qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
> +       qconf.nb_atomic_order_sequences = 32;
> +       qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
> +       qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
> +       qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
> +       err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
> +       TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);

On reconfigure, set up all the ports and queues again so that the newly
configured values are seen by them.

> +
> +       err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
> +       TEST_ASSERT(err == 1, "Failed to link queue port %u",
> +                   ev_port);
> +
> +       err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> +
> +       err = rte_event_dev_service_id_get(0, &eid);
> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);

An event device might not need a service core; check the capabilities
before requesting the service id above.

> +
> +       err = rte_event_eth_tx_adapter_service_id_get(TEST_INST_ID, &tid);
> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> +
> +       err = rte_service_runstate_set(tid, 1);
> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> +
> +       err = rte_service_set_runstate_mapped_check(tid, 0);
> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> +
<snip>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs
  2018-08-17  4:20 [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
                   ` (2 preceding siblings ...)
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
@ 2018-08-19 10:19 ` Jerin Jacob
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
  4 siblings, 0 replies; 27+ messages in thread
From: Jerin Jacob @ 2018-08-19 10:19 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, dev

-----Original Message-----
> Date: Fri, 17 Aug 2018 09:50:49 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> Subject: [PATCH v2 1/4] eventdev: add eth Tx adapter APIs
> X-Mailer: git-send-email 1.8.3.1
> 
> 
> The ethernet Tx adapter abstracts the transmit stage of an
> event driven packet processing application. The transmit
> stage may be implemented with eventdev PMD support or use a
> rte_service function implemented in the adapter. These APIs
> provide a common configuration and control interface and
> a transmit API for the eventdev PMD implementation.
> 
> The transmit port is specified using mbuf::port. The transmit
> queue is specified using the rte_event_eth_tx_adapter_txq_set()
> function.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>

Overall it looks good to me.

Could you please add a programmer's guide in the next revision?

Some minor comments below.

> ---
> 
> +/**
> + * @file
> + *
> + * RTE Event Ethernet Tx Adapter
> + *
> + * The event ethernet Tx adapter provides configuration and data path APIs
> + * for the ethernet transmit stage of an event driven packet processing
> + * application. These APIs abstract the implementation of the transmit stage
> + * and allow the application to use eventdev PMD support or a common
> + * implementation.
> + *
> + * In the common implementation, the application enqueues mbufs to the adapter
> + * which runs as a rte_service function. The service function dequeues events
> + * from its event port and transmits the mbufs referenced by these events.
> + *
> + * The ethernet Tx event adapter APIs are:
> + *
> + *  - rte_event_eth_tx_adapter_create()
> + *  - rte_event_eth_tx_adapter_create_ext()
> + *  - rte_event_eth_tx_adapter_free()
> + *  - rte_event_eth_tx_adapter_start()
> + *  - rte_event_eth_tx_adapter_stop()
> + *  - rte_event_eth_tx_adapter_queue_add()
> + *  - rte_event_eth_tx_adapter_queue_del()
> + *  - rte_event_eth_tx_adapter_stats_get()
> + *  - rte_event_eth_tx_adapter_stats_reset()
> + *  - rte_event_eth_tx_adapter_enqueue()
> + *  - rte_event_eth_tx_adapter_event_port_get()
> + *  - rte_event_eth_tx_adapter_service_id_get()
> + *
> + * The application creates the adapter using
> + * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
> + *
> + * The adapter will use the common implementation when the eventdev PMD
> + * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.

For some reason, the generated doxygen file does not show
RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT as a hyperlink.


> + * The common implementation uses an event port that is created using the port
> + * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
> + * application can get the port identifier using
> + * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
> + * this port.
> + *
> + * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
> + * flags set, Tx adapter events should be enqueued using the
> + * rte_event_eth_tx_adapter_enqueue() function, else the application should
> + * use rte_event_enqueue_burst().
> + *
> + * Transmit queues can be added and deleted from the adapter using
> + * rte_event_eth_tx_adapter_queue_add()/del() APIs respectively.
> + *
> + * The application can start and stop the adapter using the
> + * rte_event_eth_tx_adapter_start/stop() calls.
> + *
> + * The common adapter implementation uses an EAL service function as described
> + * before and its execution is controlled using the rte_service APIs. The
> + * rte_event_eth_tx_adapter_service_id_get()
> + * function can be used to retrieve the adapter's service function ID.
> + *
> + * The ethernet port and transmit queue index to transmit the mbuf on are
> + * specified using the mbuf port and the struct rte_event_tx_adapter_mbuf_queue
> + * (overlaid on mbuf::hash). The application should use the
> + * rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
> + * functions to access the transmit queue index, since it is expected that the
> + * transmit queue will eventually be defined within struct rte_mbuf; using
> + * these functions will help minimize the application impact of
> + * a change in how the transmit queue index is specified.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <stdint.h>
> +
> +#include <rte_mbuf.h>
> +
> +#include "rte_eventdev.h"
> +
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Set the Tx queue in the mbuf. This queue is used by the adapter
> + * to transmit the mbuf.
> + *
> + * @param pkt
> + *  Pointer to the mbuf.
> + * @param queue
> + *  Tx queue index.
> + */
> +static __rte_always_inline void __rte_experimental
> +rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
> +{
> +       struct rte_event_tx_adapter_mbuf_queue *mbuf_queue =
> +               (struct rte_event_tx_adapter_mbuf_queue *)(&pkt->hash);

It makes sense to have inline functions to set and get the txq so that an
mbuf change won't have any visible impact.

But can we get rid of the struct rte_event_tx_adapter_mbuf_queue
typecasting/indirection?

I prefer to use just pkt->hi and add a comment in rte_mbuf.h (see
rte_event_eth_tx_adapter_txq_set()), or add the following without breaking
anything in the existing scheme.

+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -529,6 +529,11 @@ struct rte_mbuf {
                        /**< First 4 flexible bytes or FD ID, dependent on
                             PKT_RX_FDIR_* flag in ol_flags. */
                } fdir; /**< Filter identifier if FDIR enabled */
+               struct {
+                       uint32_t resvd1;
+                       uint16_t resvd2;
+                       uint16_t txq_id;
+               } txadapter;

Reasons:
1) The additional indirection may result in additional instruction(s) in the
fast path.
2) It hides the mbuf usage behind a separate type, so consumers may be
confused about the usage, and it may mask problems when the mbuf changes,
since the changes are not in one place.

With above changes:

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
@ 2018-08-19 10:45   ` Jerin Jacob
  2018-08-21  8:52     ` Rao, Nikhil
  0 siblings, 1 reply; 27+ messages in thread
From: Jerin Jacob @ 2018-08-19 10:45 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, dev

-----Original Message-----
> Date: Fri, 17 Aug 2018 09:50:50 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> Subject: [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx
>  adapter
> X-Mailer: git-send-email 1.8.3.1
> 
> 
> The caps API allows the application to query if the transmit
> stage is implemented in the eventdev PMD or uses the common
> rte_service function. The PMD callbacks support the
> eventdev PMD implementation of the adapter.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---
> +
>  static inline int
>  rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
>  {
> @@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
>         return RTE_EVENT_MAX_DEVS;
>  }
> 
> @@ -1295,6 +1329,9 @@ struct rte_eventdev *
> 
>         eventdev = &rte_eventdevs[dev_id];
> 
> +       if (eventdev->txa_enqueue == NULL)

Is this check required? It will always be NULL here, right? If so,
can't we write eventdev->txa_enqueue directly?

> +               eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
> +

With above changes,

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>


* Re: [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-08-19 10:45   ` Jerin Jacob
@ 2018-08-21  8:52     ` Rao, Nikhil
  2018-08-21  9:11       ` Jerin Jacob
  0 siblings, 1 reply; 27+ messages in thread
From: Rao, Nikhil @ 2018-08-21  8:52 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: olivier.matz, dev, nikhil.rao

On 8/19/2018 4:15 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Fri, 17 Aug 2018 09:50:50 +0530
>> From: Nikhil Rao <nikhil.rao@intel.com>
>> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
>> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
>> Subject: [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx
>>   adapter
>> X-Mailer: git-send-email 1.8.3.1
>>
>>
>> The caps API allows the application to query if the transmit
>> stage is implemented in the eventdev PMD or uses the common
>> rte_service function. The PMD callbacks support the
>> eventdev PMD implementation of the adapter.
>>
>> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
>> ---
>> +
>>   static inline int
>>   rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
>>   {
>> @@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
>>          return RTE_EVENT_MAX_DEVS;
>>   }
>>
>> @@ -1295,6 +1329,9 @@ struct rte_eventdev *
>>
>>          eventdev = &rte_eventdevs[dev_id];
>>
>> +       if (eventdev->txa_enqueue == NULL)
> 
> Is this check required, it will be always NULL. Right? if so,
> Can't we write eventdev->txa_enqueue directly?
> 
>> +               eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
>> +
> 

The thought was that if the PMD supports txa_enqueue then it wouldn't be 
NULL.

Thanks for the review,
Nikhil


* Re: [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-08-21  8:52     ` Rao, Nikhil
@ 2018-08-21  9:11       ` Jerin Jacob
  2018-08-22 13:34         ` Rao, Nikhil
  0 siblings, 1 reply; 27+ messages in thread
From: Jerin Jacob @ 2018-08-21  9:11 UTC (permalink / raw)
  To: Rao, Nikhil; +Cc: olivier.matz, dev

-----Original Message-----
> Date: Tue, 21 Aug 2018 14:22:15 +0530
> From: "Rao, Nikhil" <nikhil.rao@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: olivier.matz@6wind.com, dev@dpdk.org, nikhil.rao@intel.com
> Subject: Re: [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for
>  eth Tx adapter
> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101
>  Thunderbird/60.0
> 
> External Email
> 
> On 8/19/2018 4:15 PM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Fri, 17 Aug 2018 09:50:50 +0530
> > > From: Nikhil Rao <nikhil.rao@intel.com>
> > > To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
> > > CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> > > Subject: [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx
> > >   adapter
> > > X-Mailer: git-send-email 1.8.3.1
> > > 
> > > 
> > > The caps API allows the application to query if the transmit
> > > stage is implemented in the eventdev PMD or uses the common
> > > rte_service function. The PMD callbacks support the
> > > eventdev PMD implementation of the adapter.
> > > 
> > > Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> > > ---
> > > +
> > >   static inline int
> > >   rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
> > >   {
> > > @@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
> > >          return RTE_EVENT_MAX_DEVS;
> > >   }
> > > 
> > > @@ -1295,6 +1329,9 @@ struct rte_eventdev *
> > > 
> > >          eventdev = &rte_eventdevs[dev_id];
> > > 
> > > +       if (eventdev->txa_enqueue == NULL)
> > 
> > Is this check required, it will be always NULL. Right? if so,
> > Can't we write eventdev->txa_enqueue directly?
> > 
> > > +               eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
> > > +
> > 
> 
> The thought was that if the PMD supports txa_enqueue then it wouldn't be
> NULL.


Yes, that's true. But in rte_event_pmd_allocate(), eventdev->txa_enqueue
will be NULL, right? So do we need to add the if (eventdev->txa_enqueue == NULL) check?

> 
> Thanks for the review,
> Nikhil
> 


* Re: [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-08-21  9:11       ` Jerin Jacob
@ 2018-08-22 13:34         ` Rao, Nikhil
  0 siblings, 0 replies; 27+ messages in thread
From: Rao, Nikhil @ 2018-08-22 13:34 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: olivier.matz, dev

On 8/21/2018 2:41 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Tue, 21 Aug 2018 14:22:15 +0530
>> From: "Rao, Nikhil" <nikhil.rao@intel.com>
>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> CC: olivier.matz@6wind.com, dev@dpdk.org, nikhil.rao@intel.com
>> Subject: Re: [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for
>>   eth Tx adapter
>> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101
>>   Thunderbird/60.0
>>
>> External Email
>>
>> On 8/19/2018 4:15 PM, Jerin Jacob wrote:
>>> -----Original Message-----
>>>> Date: Fri, 17 Aug 2018 09:50:50 +0530
>>>> From: Nikhil Rao <nikhil.rao@intel.com>
>>>> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
>>>> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
>>>> Subject: [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx
>>>>    adapter
>>>> X-Mailer: git-send-email 1.8.3.1
>>>>
>>>>
>>>> The caps API allows the application to query if the transmit
>>>> stage is implemented in the eventdev PMD or uses the common
>>>> rte_service function. The PMD callbacks support the
>>>> eventdev PMD implementation of the adapter.
>>>>
>>>> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
>>>> ---
>>>> +
>>>>    static inline int
>>>>    rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
>>>>    {
>>>> @@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
>>>>           return RTE_EVENT_MAX_DEVS;
>>>>    }
>>>>
>>>> @@ -1295,6 +1329,9 @@ struct rte_eventdev *
>>>>
>>>>           eventdev = &rte_eventdevs[dev_id];
>>>>
>>>> +       if (eventdev->txa_enqueue == NULL)
>>>
>>> Is this check required, it will be always NULL. Right? if so,
>>> Can't we write eventdev->txa_enqueue directly?
>>>
>>>> +               eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
>>>> +
>>>
>>
>> The thought was that if the PMD supports txa_enqueue then it wouldn't be
>> NULL.
> 
> 
> Yes that's true. But in rte_event_pmd_allocate(), eventdev->txa_enqueue
> it will be NULL. Right? Do we need to add the if (eventdev->txa_enqueue == NULL) check?

OK, got it.

Nikhil


* Re: [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter
  2018-08-17 11:55   ` Pavan Nikhilesh
@ 2018-08-22 16:13     ` Rao, Nikhil
  2018-08-22 16:23       ` Pavan Nikhilesh
  0 siblings, 1 reply; 27+ messages in thread
From: Rao, Nikhil @ 2018-08-22 16:13 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, olivier.matz; +Cc: dev, Rao, Nikhil

On 8/17/2018 5:25 PM, Pavan Nikhilesh wrote:
> Hi Nikhil,
> 
> Few comments inline.
> 
> Nit: Please use --in-reply-to while sending next versions.
>       (https://core.dpdk.org/contribute/)
> 
> Thanks,
> Pavan
> 
> On Fri, Aug 17, 2018 at 09:50:52AM +0530, Nikhil Rao wrote:
>>
>> This patch adds tests for the eth Tx adapter APIs. It also
>> tests the data path for the rte_service function based
>> implementation of the APIs.
>>
>> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
>> ---
>>   test/test/test_event_eth_tx_adapter.c | 676 ++++++++++++++++++++++++++++++++++
>>   MAINTAINERS                           |   1 +
>>   test/test/Makefile                    |   1 +
>>   test/test/meson.build                 |   2 +
>>   4 files changed, 680 insertions(+)
>>   create mode 100644 test/test/test_event_eth_tx_adapter.c
>>
>> diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
>> new file mode 100644
>> index 0000000..2dc487b
>> --- /dev/null
>> +++ b/test/test/test_event_eth_tx_adapter.c
> 
> <snip>
> 
>> +static int
>> +testsuite_setup(void)
>> +{
>> +       int err;
>> +       uint8_t count;
>> +       struct rte_event_dev_info dev_info;
>> +       uint8_t priority;
>> +       uint8_t queue_id;
>> +
>> +       count = rte_event_dev_count();
>> +       if (!count) {
>> +               printf("Failed to find a valid event device,"
>> +                       " testing with event_sw0 device\n");
>> +               rte_vdev_init("event_sw0", NULL);
>> +               event_dev_delete = 1;
>> +       }
>> +
>> +       struct rte_event_dev_config config = {
>> +                       .nb_event_queues = 1,
>> +                       .nb_event_ports = 1,
>> +       };
>> +
>> +       struct rte_event_queue_conf wkr_q_conf = {
>> +                       .schedule_type = RTE_SCHED_TYPE_ORDERED,
>> +                       .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>> +                       .nb_atomic_flows = 1024,
>> +                       .nb_atomic_order_sequences = 1024,
>> +       };
>> +
>> +       err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
>> +       config.nb_event_queue_flows = dev_info.max_event_queue_flows;
>> +       config.nb_event_port_dequeue_depth =
>> +                       dev_info.max_event_port_dequeue_depth;
>> +       config.nb_event_port_enqueue_depth =
>> +                       dev_info.max_event_port_enqueue_depth;
>> +       config.nb_events_limit =
>> +                       dev_info.max_num_events;
>> +
>> +       rte_log_set_level(0, RTE_LOG_DEBUG);
>> +       err = rte_event_dev_configure(TEST_DEV_ID, &config);
>> +       TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
>> +                       err);
>> +       if (rte_event_queue_setup(TEST_DEV_ID, 0, &wkr_q_conf) < 0) {
>> +               printf("%d: error creating qid %d\n", __LINE__, 0);
>> +               return -1;
>> +       }
>> +       if (rte_event_port_setup(TEST_DEV_ID, 0, NULL) < 0) {
>> +               printf("Error setting up port %d\n", 0);
>> +               return -1;
>> +       }
>> +
>> +       priority = RTE_EVENT_DEV_PRIORITY_LOWEST;
> 
> queue_id is uninitialized here; a garbage value might be passed.

Ok.
> 
>> +       if (rte_event_port_link(TEST_DEV_ID, 0, &queue_id, &priority, 1) != 1) {
>> +               printf("Error linking port\n");
>> +               return -1;
>> +       }
>> +
>> +       err = init_ports();
>> +       TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
>> +
>> +       err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
>> +                                               &default_params.caps);
>> +       TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
>> +                       err);
>> +
>> +       return err;
>> +}
>> +
> 
> <snip>
> 
>> +static uint32_t eid, tid;
>> +
>> +static int
>> +tx_adapter_single(uint16_t port, uint16_t tx_queue_id,
>> +               struct rte_mbuf *m, uint8_t qid,
>> +               uint8_t sched_type)
>> +{
>> +       struct rte_event event;
>> +       struct rte_mbuf *r;
>> +       int ret;
>> +       unsigned int l;
>> +
>> +       event.queue_id = qid;
> 
> Set event_type to RTE_EVENT_TYPE_CPU so that underlying drivers don't mess up the
> packet.
Ok.

> 
> <snip>
> 
>> +
>> +static int
>> +tx_adapter_service(void)
>> +{
>> +       struct rte_event_eth_tx_adapter_stats stats;
>> +       uint32_t i;
>> +       int err;
>> +       uint8_t ev_port, ev_qid;
>> +       struct rte_mbuf  bufs[RING_SIZE];
>> +       struct rte_mbuf *pbufs[RING_SIZE];
>> +       struct rte_event_dev_info dev_info;
>> +       struct rte_event_dev_config dev_conf;
>> +       struct rte_event_queue_conf qconf;
>> +       uint32_t qcnt, pcnt;
>> +       uint16_t q;
>> +       int internal_port;
>> +
>> +       err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
>> +                                               &default_params.caps);
>> +       TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
>> +                       err);
>> +
>> +       internal_port = !!(default_params.caps &
>> +                       RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
>> +
>> +       if (internal_port)
>> +               return TEST_SUCCESS;
>> +
>> +       err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
>> +                                               -1);
>> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
>> +
>> +       err = rte_event_eth_tx_adapter_event_port_get(TEST_INST_ID,
>> +                                               &ev_port);
>> +       TEST_ASSERT_SUCCESS(err, "Failed to get event port %d", err);
>> +
>> +       err = rte_event_dev_attr_get(TEST_DEV_ID, RTE_EVENT_DEV_ATTR_PORT_COUNT,
>> +                                       &pcnt);
>> +       TEST_ASSERT_SUCCESS(err, "Port count get failed");
>> +
>> +       err = rte_event_dev_attr_get(TEST_DEV_ID,
>> +                               RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &qcnt);
>> +       TEST_ASSERT_SUCCESS(err, "Queue count get failed");
>> +
>> +       err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
>> +       TEST_ASSERT_SUCCESS(err, "Dev info failed");
>> +
> 
> memset dev_conf to zero to avoid passing invalid values.
> 
Ok.

>> +       dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
>> +       dev_conf.nb_event_port_dequeue_depth =
>> +                       dev_info.max_event_port_dequeue_depth;
>> +       dev_conf.nb_event_port_enqueue_depth =
>> +                       dev_info.max_event_port_enqueue_depth;
>> +       dev_conf.nb_events_limit =
>> +                       dev_info.max_num_events;
>> +       dev_conf.nb_event_queues = qcnt + 1;
>> +       dev_conf.nb_event_ports = pcnt;
>> +       err = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
>> +       TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
>> +                       err);
>> +
>> +       ev_qid = qcnt;
>> +       qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
>> +       qconf.nb_atomic_order_sequences = 32;
>> +       qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
>> +       qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
>> +       qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
>> +       err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
>> +       TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);
> 
> On reconfigure, setup all the ports and queues so that the newly configured
> values are seen by them.

Is that required, since dev_conf matches dev_info except for the number
of queues?

> 
>> +
>> +       err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
>> +       TEST_ASSERT(err == 1, "Failed to link queue port %u",
>> +                   ev_port);
>> +
>> +       err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
>> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
>> +
>> +       err = rte_event_dev_service_id_get(0, &eid);
>> +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> 
> An event device might not need a service core; check the capabilities
> before requesting the service id above.
>

The internal_port capability is checked at the beginning of the
function; if it's not set, I think it's OK to assume that we need a
service core?

Thanks,
Nikhil


* Re: [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter
  2018-08-22 16:13     ` Rao, Nikhil
@ 2018-08-22 16:23       ` Pavan Nikhilesh
  2018-08-23  1:48         ` Rao, Nikhil
  0 siblings, 1 reply; 27+ messages in thread
From: Pavan Nikhilesh @ 2018-08-22 16:23 UTC (permalink / raw)
  To: Rao, Nikhil, jerin.jacob, olivier.matz; +Cc: dev

On Wed, Aug 22, 2018 at 09:43:02PM +0530, Rao, Nikhil wrote:
> > > +       ev_qid = qcnt;
> > > +       qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
> > > +       qconf.nb_atomic_order_sequences = 32;
> > > +       qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
> > > +       qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
> > > +       qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
> > > +       err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
> > > +       TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);
> >
> > On reconfigure, setup all the ports and queues so that the newly configured
> > values are seen by them.
>
> Is that required since dev_conf matches dev_info except for the number
> of queues ?

Some drivers require this: without port setup, newly configured queues might
not be visible.

>
> >
> > > +
> > > +       err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
> > > +       TEST_ASSERT(err == 1, "Failed to link queue port %u",
> > > +                   ev_port);
> > > +
> > > +       err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
> > > +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > > +
> > > +       err = rte_event_dev_service_id_get(0, &eid);
> > > +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> >
> > An event device might not be needing a service core, check the capabilities
> > before requesting service id above.
> >
>
> The internal_port capability is checked at the beginning of the
> function, if its not set, I think its Ok to assume that we need a
> service core ?

That check is to see if the Tx adapter requires a service core; the one
above checks whether the eventdev itself requires a service core.

>
> Thanks,
> Nikhil
>

Regards,
Pavan.


* Re: [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter
  2018-08-22 16:23       ` Pavan Nikhilesh
@ 2018-08-23  1:48         ` Rao, Nikhil
  0 siblings, 0 replies; 27+ messages in thread
From: Rao, Nikhil @ 2018-08-23  1:48 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, olivier.matz; +Cc: dev, Rao, Nikhil

> -----Original Message-----
> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Wednesday, August 22, 2018 9:53 PM
> To: Rao, Nikhil <nikhil.rao@intel.com>; jerin.jacob@caviumnetworks.com;
> olivier.matz@6wind.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter
> 
> On Wed, Aug 22, 2018 at 09:43:02PM +0530, Rao, Nikhil wrote:
> > > > +       ev_qid = qcnt;
> > > > +       qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
> > > > +       qconf.nb_atomic_order_sequences = 32;
> > > > +       qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
> > > > +       qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
> > > > +       qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
> > > > +       err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
> > > > +       TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u",
> > > > + ev_qid);
> > >
> > > On reconfigure, setup all the ports and queues so that the newly
> > > configured values are seen by them.
> >
> > Is that required since dev_conf matches dev_info except for the number
> > of queues ?
> 
> Some drivers require this as without port setup newly configured queues
> might not be visible.
> 
OK.

> >
> > >
> > > > +
> > > > +       err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL,
> 1);
> > > > +       TEST_ASSERT(err == 1, "Failed to link queue port %u",
> > > > +                   ev_port);
> > > > +
> > > > +       err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
> > > > +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > > > +
> > > > +       err = rte_event_dev_service_id_get(0, &eid);
> > > > +       TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > >
> > > An event device might not be needing a service core, check the
> > > capabilities before requesting service id above.
> > >
> >
> > The internal_port capability is checked at the beginning of the
> > function, if its not set, I think its Ok to assume that we need a
> > service core ?
> 
> That check is to see if TX adapter requires service core, the above one is
> checking if eventdev requires service core.

Got it, I was misreading the comment.

Thanks,
Nikhil


* [dpdk-dev] [PATCH v3 1/5] eventdev: add eth Tx adapter APIs
  2018-08-17  4:20 [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
                   ` (3 preceding siblings ...)
  2018-08-19 10:19 ` [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
@ 2018-08-31  5:41 ` Nikhil Rao
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
                     ` (4 more replies)
  4 siblings, 5 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-08-31  5:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

The ethernet Tx adapter abstracts the transmit stage of an
event driven packet processing application. The transmit
stage may be implemented with eventdev PMD support or use a
rte_service function implemented in the adapter. These APIs
provide a common configuration and control interface and
a transmit API for the eventdev PMD implementation.

The transmit port is specified using mbuf::port. The transmit
queue is specified using the rte_event_eth_tx_adapter_txq_set()
function.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---

RFC -> V1:
=========

* Move port and tx queue id to mbuf from mbuf private area. (Jerin Jacob)

* Support for PMD transmit function. (Jerin Jacob)

* mbuf change has been replaced with rte_event_eth_tx_adapter_txq_set(). 
The goal is to align with the mbuf change for a qid field. 
(http://mails.dpdk.org/archives/dev/2018-February/090651.html). Once the mbuf
change is available, the function can be replaced with a macro with no impact
to applications.

* Various cleanups (Jerin Jacob)

V2:
==

* Add ethdev port to rte_event_eth_tx_adapter_caps_get() (Pavan Nikhilesh)

* Remove struct rte_event_eth_tx_adapter from rte_event_eth_tx_adapter.h
							(Jerin Jacob)

* Move rte_event_eth_tx_adapter_txq_set() to static inline,
  also add rte_event_eth_tx_adapter_txq_get().

* Remove adapter id from rte_event_eth_tx_adapter_enqueue(), also update
  eventdev PMD callback (Jerin Jacob)

* Fix doxy-api-index.md (Jerin Jacob)

* Rename eventdev_eth_tx_adapter_init_t to eventdev_eth_tx_adapter_create_t,
  Invoke it from rte_event_eth_tx_adapter_create()/create_ext()

* Remove the eventdev_eth_tx_adapter_event_port_get PMD callback, the event
  port is used by the application only in the case of the service function
  implementation.

* Add support for dynamic device (created after the adapter) in the service
  function implementation.

V3
==
* Fix RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT hyperlink (Jerin Jacob)

* Use pkt->hi directly instead of struct rte_event_tx_adapter_mbuf_queue (Jerin Jacob)

* Remove eventdev->txa_enqueue NULL check (Jerin Jacob)

* Fix x86_x32 compile error in auto test (Jerin Jacob)

* Add check for scheduler service id in auto test (Pavan Nikhilesh)

* Init queue, memset dev_conf to zero in auto test (Pavan Nikhilesh)

* Re-initialize port configuration in auto-test after device configuration (Pavan Nikhilesh)

 lib/librte_eventdev/rte_event_eth_tx_adapter.h | 462 +++++++++++++++++++++++++
 lib/librte_mbuf/rte_mbuf.h                     |   5 +-
 MAINTAINERS                                    |   5 +
 doc/api/doxy-api-index.md                      |   1 +
 4 files changed, 472 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.h

diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
new file mode 100644
index 0000000..3e0d5c6
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -0,0 +1,462 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+
+#ifndef _RTE_EVENT_ETH_TX_ADAPTER_
+#define _RTE_EVENT_ETH_TX_ADAPTER_
+
+/**
+ * @file
+ *
+ * RTE Event Ethernet Tx Adapter
+ *
+ * The event ethernet Tx adapter provides configuration and data path APIs
+ * for the ethernet transmit stage of an event driven packet processing
+ * application. These APIs abstract the implementation of the transmit stage
+ * and allow the application to use eventdev PMD support or a common
+ * implementation.
+ *
+ * In the common implementation, the application enqueues mbufs to the adapter
+ * which runs as a rte_service function. The service function dequeues events
+ * from its event port and transmits the mbufs referenced by these events.
+ *
+ * The ethernet Tx event adapter APIs are:
+ *
+ *  - rte_event_eth_tx_adapter_create()
+ *  - rte_event_eth_tx_adapter_create_ext()
+ *  - rte_event_eth_tx_adapter_free()
+ *  - rte_event_eth_tx_adapter_start()
+ *  - rte_event_eth_tx_adapter_stop()
+ *  - rte_event_eth_tx_adapter_queue_add()
+ *  - rte_event_eth_tx_adapter_queue_del()
+ *  - rte_event_eth_tx_adapter_stats_get()
+ *  - rte_event_eth_tx_adapter_stats_reset()
+ *  - rte_event_eth_tx_adapter_enqueue()
+ *  - rte_event_eth_tx_adapter_event_port_get()
+ *  - rte_event_eth_tx_adapter_service_id_get()
+ *
+ * The application creates the adapter using
+ * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
+ *
+ * The adapter will use the common implementation when the eventdev PMD
+ * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
+ * The common implementation uses an event port that is created using the port
+ * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
+ * application can get the port identifier using
+ * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
+ * this port.
+ *
+ * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * flag set, Tx adapter events should be enqueued using the
+ * rte_event_eth_tx_adapter_enqueue() function, else the application should
+ * use rte_event_enqueue_burst().
+ *
+ * Transmit queues can be added and deleted from the adapter using
+ * rte_event_eth_tx_adapter_queue_add()/del() APIs respectively.
+ *
+ * The application can start and stop the adapter using the
+ * rte_event_eth_tx_adapter_start/stop() calls.
+ *
+ * The common adapter implementation uses an EAL service function as described
+ * before and its execution is controlled using the rte_service APIs. The
+ * rte_event_eth_tx_adapter_service_id_get()
+ * function can be used to retrieve the adapter's service function ID.
+ *
+ * The ethernet port and transmit queue index to transmit the mbuf on are
+ * specified using the mbuf port and the upper 16 bits of
+ * struct rte_mbuf::hash::sched::hi. The application should use the
+ * rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
+ * functions to access the transmit queue index, since it is expected that the
+ * transmit queue will eventually be defined within struct rte_mbuf; using
+ * these functions will help minimize the application impact of
+ * a change in how the transmit queue index is specified.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+
+#include "rte_eventdev.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Adapter configuration structure
+ *
+ * @see rte_event_eth_tx_adapter_create_ext
+ * @see rte_event_eth_tx_adapter_conf_cb
+ */
+struct rte_event_eth_tx_adapter_conf {
+	uint8_t event_port_id;
+	/**< Event port identifier, the adapter service function dequeues mbuf
+	 * events from this port.
+	 * @see RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+	 */
+	uint32_t max_nb_tx;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
+	 * cause the adapter to process more than max_nb_tx mbufs.
+	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Function type used for adapter configuration callback. The callback is
+ * used to fill in members of struct rte_event_eth_tx_adapter_conf; it is
+ * invoked when creating an RTE service function based
+ * adapter implementation.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param dev_id
+ *  Event device identifier.
+ * @param [out] conf
+ *  Structure that needs to be populated by this callback.
+ * @param arg
+ *  Argument to the callback. This is the same as the conf_arg passed to the
+ *  rte_event_eth_tx_adapter_create_ext().
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+typedef int (*rte_event_eth_tx_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
+				struct rte_event_eth_tx_adapter_conf *conf,
+				void *arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * A structure used to retrieve statistics for an ethernet Tx adapter instance.
+ */
+struct rte_event_eth_tx_adapter_stats {
+	uint64_t tx_retry;
+	/**< Number of transmit retries */
+	uint64_t tx_packets;
+	/**< Number of packets transmitted */
+	uint64_t tx_dropped;
+	/**< Number of packets dropped */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param port_config
+ *  Event port configuration, the adapter uses this configuration to
+ *  create an event port if needed.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_config);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param conf_cb
+ *  Callback function that initializes members of the
+ *  struct rte_event_eth_tx_adapter_conf passed into
+ *  it.
+ * @param conf_arg
+ *  Argument that is passed to the conf_cb function.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free an ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure. If the adapter still has Tx queues
+ *      added to it, the function returns -EBUSY.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Start ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Stop ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues added successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device, that have been added to this
+ * adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set Tx queue in the mbuf. This queue is used by the adapter
+ * to transmit the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @param queue
+ *  Tx queue index.
+ */
+static __rte_always_inline void __rte_experimental
+rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
+{
+	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
+	p[1] = queue;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve Tx queue from the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @return
+ *  Tx queue identifier.
+ *
+ * @see rte_event_eth_tx_adapter_txq_set()
+ */
+static __rte_always_inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
+{
+	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
+	return p[1];
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the adapter event port. The adapter creates an event port if
+ * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * ethernet Tx capabilities of the event device.
+ *
+ * @param id
+ *  Adapter Identifier.
+ * @param[out] event_port_id
+ *  Event port pointer.
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
+/**
+ * Enqueue a burst of event objects or an event object supplied in *rte_event*
+ * structure on an event device designated by its *dev_id* through the event
+ * port specified by *port_id*. This function is supported if the eventdev PMD
+ * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue, which
+ * are supplied in the *ev* array of *rte_event* structures.
+ *
+ * The rte_event_eth_tx_adapter_enqueue() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param dev_id
+ *  The identifier of the device.
+ * @param port_id
+ *  The identifier of the event port.
+ * @param ev
+ *  Points to an array of *nb_events* objects of type *rte_event* structure
+ *  which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *  The number of event objects to enqueue, typically number of
+ *  rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed and the caller has to take care
+ *   of them, and rte_errno is set accordingly. Possible errno values include:
+ *   - -EINVAL  The port ID is invalid, device ID is invalid, an event's queue
+ *              ID is invalid, or an event's sched type doesn't match the
+ *              capabilities of the destination queue.
+ *   - -ENOSPC  The event port was backpressured and unable to enqueue
+ *              one or more events. This error code is only applicable to
+ *              closed systems.
+ */
+static inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
+				uint8_t port_id,
+				struct rte_event ev[],
+				uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+		!rte_eventdevs[dev_id].attached) {
+		rte_errno = -EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = -EINVAL;
+		return 0;
+	}
+#endif
+	return dev->txa_enqueue(dev->data->ports[port_id], ev, nb_events);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter.
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the service ID of an adapter. If the adapter doesn't use
+ * an rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] service_id
+ *  A pointer to a uint32_t, to be filled in with the service id.
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure; -ESRCH if the adapter doesn't use an
+ * rte_service function.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif	/* _RTE_EVENT_ETH_TX_ADAPTER_ */
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 9ce5d76..93dffeb 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -464,7 +464,9 @@ struct rte_mbuf {
 	};
 	uint16_t nb_segs;         /**< Number of segments. */
 
-	/** Input port (16 bits to support more than 256 virtual ports). */
+	/** Input port (16 bits to support more than 256 virtual ports).
+	 * The event eth Tx adapter uses this field to specify the output port.
+	 */
 	uint16_t port;
 
 	uint64_t ol_flags;        /**< Offload features. */
@@ -530,6 +532,7 @@ struct rte_mbuf {
 		struct {
 			uint32_t lo;
 			uint32_t hi;
+			/**< @see rte_event_eth_tx_adapter_txq_set */
 		} sched;          /**< Hierarchical scheduler */
 		uint32_t usr;	  /**< User defined tags. See rte_distributor_process() */
 	} hash;                   /**< hash information */
diff --git a/MAINTAINERS b/MAINTAINERS
index 9fd258f..ea9e0e7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -391,6 +391,11 @@ F: lib/librte_eventdev/*crypto_adapter*
 F: test/test/test_event_crypto_adapter.c
 F: doc/guides/prog_guide/event_crypto_adapter.rst
 
+Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
+M: Nikhil Rao <nikhil.rao@intel.com>
+T: git://dpdk.org/next/dpdk-next-eventdev
+F: lib/librte_eventdev/*eth_tx_adapter*
+
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 9265907..911a902 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -24,6 +24,7 @@ The public API headers are grouped by topics:
   [event_eth_rx_adapter]   (@ref rte_event_eth_rx_adapter.h),
   [event_timer_adapter]    (@ref rte_event_timer_adapter.h),
   [event_crypto_adapter]   (@ref rte_event_crypto_adapter.h),
+  [event_eth_tx_adapter]   (@ref rte_event_eth_tx_adapter.h),
   [rawdev]             (@ref rte_rawdev.h),
   [metrics]            (@ref rte_metrics.h),
   [bitrate]            (@ref rte_bitrate.h),
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v3 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
@ 2018-08-31  5:41   ` Nikhil Rao
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 3/5] eventdev: add eth Tx adapter implementation Nikhil Rao
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-08-31  5:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

The caps API allows the application to query if the transmit
stage is implemented in the eventdev PMD or uses the common
rte_service function. The PMD callbacks support the
eventdev PMD implementation of the adapter.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_eventdev/rte_event_eth_tx_adapter.h |   8 +-
 lib/librte_eventdev/rte_eventdev.h             |  33 +++-
 lib/librte_eventdev/rte_eventdev_pmd.h         | 200 +++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c             |  36 +++++
 4 files changed, 272 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
index 3e0d5c6..0f378a6 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -39,14 +39,14 @@
  * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
  *
  * The adapter will use the common implementation when the eventdev PMD
- * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
+ * does not have the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
  * The common implementation uses an event port that is created using the port
  * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
  * application can get the port identifier using
  * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
  * this port.
  *
- * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * If the eventdev PMD has the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
  * flags set, Tx adapter events should be enqueued using the
  * rte_event_eth_tx_adapter_enqueue() function, else the application should
  * use rte_event_enqueue_burst().
@@ -329,7 +329,7 @@ struct rte_event_eth_tx_adapter_stats {
  * @b EXPERIMENTAL: this API may change without prior notice
  *
  * Retrieve the adapter event port. The adapter creates an event port if
- * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
  * ethernet Tx capabilities of the event device.
  *
  * @param id
@@ -347,7 +347,7 @@ struct rte_event_eth_tx_adapter_stats {
 * Enqueue a burst of event objects or an event object supplied in *rte_event*
 * structure on an event device designated by its *dev_id* through the event
  * port specified by *port_id*. This function is supported if the eventdev PMD
- * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
+ * has the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
  *
  * The *nb_events* parameter is the number of event objects to enqueue which are
  * supplied in the *ev* array of *rte_event* structure.
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b6fd6ee..4717292 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1186,6 +1186,32 @@ struct rte_event {
 rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 				  uint32_t *caps);
 
+/* Ethdev Tx adapter capability bitmap flags */
+#define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT	0x1
+/**< This flag is set when the PMD supports a packet transmit callback
+ */
+
+/**
+ * Retrieve the event device's eth Tx adapter capabilities
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param eth_port_id
+ *   The identifier of the ethernet device.
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
+				uint32_t *caps);
+
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
@@ -1204,6 +1230,10 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 		uint16_t nb_events, uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+typedef uint16_t (*event_tx_adapter_enqueue)(void *port,
+				struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
 #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
 /**< @internal Max length of name of event PMD */
 
@@ -1266,7 +1296,8 @@ struct rte_eventdev {
 	/**< Pointer to PMD dequeue function. */
 	event_dequeue_burst_t dequeue_burst;
 	/**< Pointer to PMD dequeue burst function. */
-
+	event_tx_adapter_enqueue txa_enqueue;
+	/**< Pointer to PMD eth Tx adapter enqueue function. */
 	struct rte_eventdev_data *data;
 	/**< Pointer to device data */
 	struct rte_eventdev_ops *dev_ops;
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 3fbb4d2..cd7145a 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -789,6 +789,186 @@ typedef int (*eventdev_crypto_adapter_stats_reset)
 			(const struct rte_eventdev *dev,
 			 const struct rte_cryptodev *cdev);
 
+/**
+ * Retrieve the event device's eth Tx adapter capabilities.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_tx_adapter_caps_get_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					uint32_t *caps);
+
+/**
+ * Create adapter callback.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_create_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Free adapter callback.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_free_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param tx_queue_id
+ *   Transmit queue index
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_add_t)(
+					uint8_t id,
+					const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device, that have been added to this
+ * adapter.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param tx_queue_id
+ *   Transmit queue index
+ *
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_del_t)(
+					uint8_t id,
+					const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Start the adapter.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_start_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Stop the adapter.
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stop_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+struct rte_event_eth_tx_adapter_stats;
+
+/**
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter
+ *
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_get_t)(
+				uint8_t id,
+				const struct rte_eventdev *dev,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * Reset statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_reset_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
 /** Event device operations function pointer table */
 struct rte_eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -862,6 +1042,26 @@ struct rte_eventdev_ops {
 	eventdev_crypto_adapter_stats_reset crypto_adapter_stats_reset;
 	/**< Reset crypto stats */
 
+	eventdev_eth_tx_adapter_caps_get_t eth_tx_adapter_caps_get;
+	/**< Get ethernet Tx adapter capabilities */
+
+	eventdev_eth_tx_adapter_create_t eth_tx_adapter_create;
+	/**< Create adapter callback */
+	eventdev_eth_tx_adapter_free_t eth_tx_adapter_free;
+	/**< Free adapter callback */
+	eventdev_eth_tx_adapter_queue_add_t eth_tx_adapter_queue_add;
+	/**< Add Tx queues to the eth Tx adapter */
+	eventdev_eth_tx_adapter_queue_del_t eth_tx_adapter_queue_del;
+	/**< Delete Tx queues from the eth Tx adapter */
+	eventdev_eth_tx_adapter_start_t eth_tx_adapter_start;
+	/**< Start eth Tx adapter */
+	eventdev_eth_tx_adapter_stop_t eth_tx_adapter_stop;
+	/**< Stop eth Tx adapter */
+	eventdev_eth_tx_adapter_stats_get_t eth_tx_adapter_stats_get;
+	/**< Get eth Tx adapter statistics */
+	eventdev_eth_tx_adapter_stats_reset_t eth_tx_adapter_stats_reset;
+	/**< Reset eth Tx adapter statistics */
+
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
 
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 801810e..c4decd5 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -175,6 +175,31 @@
 		(dev, cdev, caps) : -ENOTSUP;
 }
 
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
+				uint32_t *caps)
+{
+	struct rte_eventdev *dev;
+	struct rte_eth_dev *eth_dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_port_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+	eth_dev = &rte_eth_devices[eth_port_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+
+	*caps = 0;
+
+	return dev->dev_ops->eth_tx_adapter_caps_get ?
+			(*dev->dev_ops->eth_tx_adapter_caps_get)(dev,
+								eth_dev,
+								caps)
+			: 0;
+}
+
 static inline int
 rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
@@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
 	return RTE_EVENT_MAX_DEVS;
 }
 
+static uint16_t
+rte_event_tx_adapter_enqueue(__rte_unused void *port,
+			__rte_unused struct rte_event ev[],
+			__rte_unused uint16_t nb_events)
+{
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
 struct rte_eventdev *
 rte_event_pmd_allocate(const char *name, int socket_id)
 {
@@ -1295,6 +1329,8 @@ struct rte_eventdev *
 
 	eventdev = &rte_eventdevs[dev_id];
 
+	eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
+
 	if (eventdev->data == NULL) {
 		struct rte_eventdev_data *eventdev_data = NULL;
 
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v3 3/5] eventdev: add eth Tx adapter implementation
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
@ 2018-08-31  5:41   ` Nikhil Rao
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-08-31  5:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

This patch implements the Tx adapter APIs by invoking the
corresponding eventdev PMD callbacks and also provides
the common rte_service function based implementation when
the eventdev PMD support is absent.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 config/rte_config.h                            |    1 +
 lib/librte_eventdev/rte_event_eth_tx_adapter.c | 1138 ++++++++++++++++++++++++
 config/common_base                             |    2 +-
 lib/librte_eventdev/Makefile                   |    2 +
 lib/librte_eventdev/meson.build                |    6 +-
 lib/librte_eventdev/rte_eventdev_version.map   |   12 +
 6 files changed, 1158 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.c

diff --git a/config/rte_config.h b/config/rte_config.h
index a8e4797..b338b15 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -66,6 +66,7 @@
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 10
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
new file mode 100644
index 0000000..aae0378
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -0,0 +1,1138 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+#include <rte_spinlock.h>
+#include <rte_service_component.h>
+#include <rte_ethdev.h>
+
+#include "rte_eventdev_pmd.h"
+#include "rte_event_eth_tx_adapter.h"
+
+#define TXA_BATCH_SIZE		32
+#define TXA_SERVICE_NAME_LEN	32
+#define TXA_MEM_NAME_LEN	32
+#define TXA_FLUSH_THRESHOLD	1024
+#define TXA_RETRY_CNT		100
+#define TXA_MAX_NB_TX		128
+#define TXA_INVALID_DEV_ID	INT32_C(-1)
+#define TXA_INVALID_SERVICE_ID	INT64_C(-1)
+
+#define txa_evdev(id) (&rte_eventdevs[txa_dev_id_array[(id)]])
+
+#define txa_dev_caps_get(id) txa_evdev((id))->dev_ops->eth_tx_adapter_caps_get
+
+#define txa_dev_adapter_create(t) txa_evdev(t)->dev_ops->eth_tx_adapter_create
+
+#define txa_dev_adapter_create_ext(t) \
+				txa_evdev(t)->dev_ops->eth_tx_adapter_create
+
+#define txa_dev_adapter_free(t) txa_evdev(t)->dev_ops->eth_tx_adapter_free
+
+#define txa_dev_queue_add(id) txa_evdev(id)->dev_ops->eth_tx_adapter_queue_add
+
+#define txa_dev_queue_del(t) txa_evdev(t)->dev_ops->eth_tx_adapter_queue_del
+
+#define txa_dev_start(t) txa_evdev(t)->dev_ops->eth_tx_adapter_start
+
+#define txa_dev_stop(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stop
+
+#define txa_dev_stats_reset(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_reset
+
+#define txa_dev_stats_get(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_get
+
+#define RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+do { \
+	if (!txa_valid_id(id)) { \
+		RTE_EDEV_LOG_ERR("Invalid eth Tx adapter id = %d", id); \
+		return retval; \
+	} \
+} while (0)
+
+#define TXA_CHECK_OR_ERR_RET(id) \
+do {\
+	int ret; \
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET((id), -EINVAL); \
+	ret = txa_init(); \
+	if (ret != 0) \
+		return ret; \
+	if (!txa_adapter_exist((id))) \
+		return -EINVAL; \
+} while (0)
+
+/* Tx retry callback structure */
+struct txa_retry {
+	/* Ethernet port id */
+	uint16_t port_id;
+	/* Tx queue */
+	uint16_t tx_queue;
+	/* Adapter ID */
+	uint8_t id;
+};
+
+/* Per queue structure */
+struct txa_service_queue_info {
+	/* Queue has been added */
+	uint8_t added;
+	/* Retry callback argument */
+	struct txa_retry txa_retry;
+	/* Tx buffer */
+	struct rte_eth_dev_tx_buffer *tx_buf;
+};
+
+/* PMD private structure */
+struct txa_service_data {
+	/* Max mbufs processed in any service function invocation */
+	uint32_t max_nb_tx;
+	/* Number of Tx queues in adapter */
+	uint32_t nb_queues;
+	/*  Synchronization with data path */
+	rte_spinlock_t tx_lock;
+	/* Event port ID */
+	uint8_t port_id;
+	/* Event device identifier */
+	uint8_t eventdev_id;
+	/* Highest port id supported + 1 */
+	uint16_t dev_count;
+	/* Loop count to flush Tx buffers */
+	int loop_cnt;
+	/* Per ethernet device structure */
+	struct txa_service_ethdev *txa_ethdev;
+	/* Statistics */
+	struct rte_event_eth_tx_adapter_stats stats;
+	/* Adapter Identifier */
+	uint8_t id;
+	/* Conf arg must be freed */
+	uint8_t conf_free;
+	/* Configuration callback */
+	rte_event_eth_tx_adapter_conf_cb conf_cb;
+	/* Configuration callback argument */
+	void *conf_arg;
+	/* socket id */
+	int socket_id;
+	/* Per adapter EAL service */
+	int64_t service_id;
+	/* Memory allocation name */
+	char mem_name[TXA_MEM_NAME_LEN];
+} __rte_cache_aligned;
+
+/* Per eth device structure */
+struct txa_service_ethdev {
+	/* Pointer to ethernet device */
+	struct rte_eth_dev *dev;
+	/* Number of queues added */
+	uint16_t nb_queues;
+	/* PMD specific queue data */
+	void *queues;
+};
+
+/* Array of adapter instances, initialized with event device id
+ * when adapter is created
+ */
+static int *txa_dev_id_array;
+
+/* Array of pointers to service implementation data */
+static struct txa_service_data **txa_service_data_array;
+
+static int32_t txa_service_func(void *args);
+static int txa_service_adapter_create_ext(uint8_t id,
+			struct rte_eventdev *dev,
+			rte_event_eth_tx_adapter_conf_cb conf_cb,
+			void *conf_arg);
+static int txa_service_queue_del(uint8_t id,
+				const struct rte_eth_dev *dev,
+				int32_t tx_queue_id);
+
+static int
+txa_adapter_exist(uint8_t id)
+{
+	return txa_dev_id_array[id] != TXA_INVALID_DEV_ID;
+}
+
+static inline int
+txa_valid_id(uint8_t id)
+{
+	return id < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE;
+}
+
+static void *
+txa_memzone_array_get(const char *name, unsigned int elt_size, int nb_elems)
+{
+	const struct rte_memzone *mz;
+	unsigned int sz;
+
+	sz = elt_size * nb_elems;
+	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+
+	mz = rte_memzone_lookup(name);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve_aligned(name, sz, rte_socket_id(), 0,
+						 RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			RTE_EDEV_LOG_ERR("failed to reserve memzone"
+					" name = %s err = %"
+					PRId32, name, rte_errno);
+			return NULL;
+		}
+	}
+
+	return mz->addr;
+}
+
+static int
+txa_dev_id_array_init(void)
+{
+	if (txa_dev_id_array == NULL) {
+		int i;
+
+		txa_dev_id_array = txa_memzone_array_get("txa_adapter_array",
+					sizeof(int),
+					RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE);
+		if (txa_dev_id_array == NULL)
+			return -ENOMEM;
+
+		for (i = 0; i < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE; i++)
+			txa_dev_id_array[i] = TXA_INVALID_DEV_ID;
+	}
+
+	return 0;
+}
+
+static int
+txa_init(void)
+{
+	return txa_dev_id_array_init();
+}
+
+static int
+txa_service_data_init(void)
+{
+	if (txa_service_data_array == NULL) {
+		txa_service_data_array =
+				txa_memzone_array_get("txa_service_data_array",
+					sizeof(*txa_service_data_array),
+					RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE);
+		if (txa_service_data_array == NULL)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static inline struct txa_service_data *
+txa_service_id_to_data(uint8_t id)
+{
+	return txa_service_data_array[id];
+}
+
+static inline struct txa_service_queue_info *
+txa_service_queue(struct txa_service_data *txa, uint16_t port_id,
+		uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	if (unlikely(txa->txa_ethdev == NULL || txa->dev_count < port_id + 1))
+		return NULL;
+
+	tqi = txa->txa_ethdev[port_id].queues;
+
+	return likely(tqi != NULL) ? tqi + tx_queue_id : NULL;
+}
+
+static int
+txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
+		struct rte_event_eth_tx_adapter_conf *conf, void *arg)
+{
+	int ret;
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf *pc;
+	struct rte_event_dev_config dev_conf;
+	int started;
+	uint8_t port_id;
+
+	pc = arg;
+	dev = &rte_eventdevs[dev_id];
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(dev_id);
+
+	port_id = dev_conf.nb_event_ports;
+	dev_conf.nb_event_ports += 1;
+
+	ret = rte_event_dev_configure(dev_id, &dev_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to configure event dev %u",
+						dev_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	pc->disable_implicit_release = 0;
+	ret = rte_event_port_setup(dev_id, port_id, pc);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to setup event port %u",
+					port_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	conf->event_port_id = port_id;
+	conf->max_nb_tx = TXA_MAX_NB_TX;
+	if (started)
+		ret = rte_event_dev_start(dev_id);
+	return ret;
+}
+
+static int
+txa_service_ethdev_alloc(struct txa_service_data *txa)
+{
+	struct txa_service_ethdev *txa_ethdev;
+	uint16_t i, dev_count;
+
+	dev_count = rte_eth_dev_count_avail();
+	if (txa->txa_ethdev && dev_count == txa->dev_count)
+		return 0;
+
+	txa_ethdev = rte_zmalloc_socket(txa->mem_name,
+					dev_count * sizeof(*txa_ethdev),
+					0,
+					txa->socket_id);
+	if (txa_ethdev == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to alloc txa::txa_ethdev ");
+		return -ENOMEM;
+	}
+
+	if (txa->dev_count)
+		memcpy(txa_ethdev, txa->txa_ethdev,
+			txa->dev_count * sizeof(*txa_ethdev));
+
+	RTE_ETH_FOREACH_DEV(i) {
+		if (i == dev_count)
+			break;
+		txa_ethdev[i].dev = &rte_eth_devices[i];
+	}
+
+	txa->txa_ethdev = txa_ethdev;
+	txa->dev_count = dev_count;
+	return 0;
+}
+
+static int
+txa_service_queue_array_alloc(struct txa_service_data *txa,
+			uint16_t port_id)
+{
+	struct txa_service_queue_info *tqi;
+	uint16_t nb_queue;
+	int ret;
+
+	ret = txa_service_ethdev_alloc(txa);
+	if (ret != 0)
+		return ret;
+
+	if (txa->txa_ethdev[port_id].queues)
+		return 0;
+
+	nb_queue = txa->txa_ethdev[port_id].dev->data->nb_tx_queues;
+	tqi = rte_zmalloc_socket(txa->mem_name,
+				nb_queue *
+				sizeof(struct txa_service_queue_info), 0,
+				txa->socket_id);
+	if (tqi == NULL)
+		return -ENOMEM;
+	txa->txa_ethdev[port_id].queues = tqi;
+	return 0;
+}
+
+static void
+txa_service_queue_array_free(struct txa_service_data *txa,
+			uint16_t port_id)
+{
+	struct txa_service_ethdev *txa_ethdev;
+	struct txa_service_queue_info *tqi;
+
+	if (txa->txa_ethdev == NULL)
+		return;
+
+	txa_ethdev = &txa->txa_ethdev[port_id];
+	if (txa_ethdev->nb_queues != 0)
+		return;
+
+	tqi = txa_ethdev->queues;
+	txa_ethdev->queues = NULL;
+	rte_free(tqi);
+
+	if (txa->nb_queues == 0) {
+		rte_free(txa->txa_ethdev);
+		txa->txa_ethdev = NULL;
+	}
+}
+
+static void
+txa_service_unregister(struct txa_service_data *txa)
+{
+	if (txa->service_id != TXA_INVALID_SERVICE_ID) {
+		rte_service_component_runstate_set(txa->service_id, 0);
+		while (rte_service_may_be_active(txa->service_id))
+			rte_pause();
+		rte_service_component_unregister(txa->service_id);
+	}
+	txa->service_id = TXA_INVALID_SERVICE_ID;
+}
+
+static int
+txa_service_register(struct txa_service_data *txa)
+{
+	int ret;
+	uint32_t service_id;
+	struct rte_service_spec service;
+	struct rte_event_eth_tx_adapter_conf conf;
+
+	if (txa->service_id != TXA_INVALID_SERVICE_ID)
+		return 0;
+
+	memset(&service, 0, sizeof(service));
+	snprintf(service.name, TXA_SERVICE_NAME_LEN, "txa_%d", txa->id);
+	service.socket_id = txa->socket_id;
+	service.callback = txa_service_func;
+	service.callback_userdata = txa;
+	service.capabilities = RTE_SERVICE_CAP_MT_SAFE;
+	ret = rte_service_component_register(&service, &service_id);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to register service %s err = %"
+				 PRId32, service.name, ret);
+		return ret;
+	}
+	txa->service_id = service_id;
+
+	ret = txa->conf_cb(txa->id, txa->eventdev_id, &conf, txa->conf_arg);
+	if (ret) {
+		txa_service_unregister(txa);
+		return ret;
+	}
+
+	rte_service_component_runstate_set(txa->service_id, 1);
+	txa->port_id = conf.event_port_id;
+	txa->max_nb_tx = conf.max_nb_tx;
+	return 0;
+}
+
+static struct rte_eth_dev_tx_buffer *
+txa_service_tx_buf_alloc(struct txa_service_data *txa,
+			const struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	port_id = dev->data->port_id;
+	tb = rte_zmalloc_socket(txa->mem_name,
+				RTE_ETH_TX_BUFFER_SIZE(TXA_BATCH_SIZE),
+				0,
+				rte_eth_dev_socket_id(port_id));
+	if (tb == NULL)
+		RTE_EDEV_LOG_ERR("Failed to allocate memory for tx buffer");
+	return tb;
+}
+
+static int
+txa_service_is_queue_added(struct txa_service_data *txa,
+			const struct rte_eth_dev *dev,
+			uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	tqi = txa_service_queue(txa, dev->data->port_id, tx_queue_id);
+	return tqi && tqi->added;
+}
+
+static int
+txa_service_ctrl(uint8_t id, int start)
+{
+	int ret;
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return 0;
+
+	ret = rte_service_runstate_set(txa->service_id, start);
+	if (ret == 0 && !start) {
+		while (rte_service_may_be_active(txa->service_id))
+			rte_pause();
+	}
+	return ret;
+}
+
+static void
+txa_service_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+			void *userdata)
+{
+	struct txa_retry *tr;
+	struct txa_service_data *data;
+	struct rte_event_eth_tx_adapter_stats *stats;
+	uint16_t sent = 0;
+	unsigned int retry = 0;
+	uint16_t i, n;
+
+	tr = (struct txa_retry *)(uintptr_t)userdata;
+	data = txa_service_id_to_data(tr->id);
+	stats = &data->stats;
+
+	do {
+		n = rte_eth_tx_burst(tr->port_id, tr->tx_queue,
+			       &pkts[sent], unsent - sent);
+
+		sent += n;
+	} while (sent != unsent && retry++ < TXA_RETRY_CNT);
+
+	for (i = sent; i < unsent; i++)
+		rte_pktmbuf_free(pkts[i]);
+
+	stats->tx_retry += retry;
+	stats->tx_packets += sent;
+	stats->tx_dropped += unsent - sent;
+}
+
+static void
+txa_service_tx(struct txa_service_data *txa, struct rte_event *ev,
+	uint32_t n)
+{
+	uint32_t i;
+	uint16_t nb_tx;
+	struct rte_event_eth_tx_adapter_stats *stats;
+
+	stats = &txa->stats;
+
+	nb_tx = 0;
+	for (i = 0; i < n; i++) {
+		struct rte_mbuf *m;
+		uint16_t port;
+		uint16_t queue;
+		struct txa_service_queue_info *tqi;
+
+		m = ev[i].mbuf;
+		port = m->port;
+		queue = rte_event_eth_tx_adapter_txq_get(m);
+
+		tqi = txa_service_queue(txa, port, queue);
+		if (unlikely(tqi == NULL || !tqi->added)) {
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		nb_tx += rte_eth_tx_buffer(port, queue, tqi->tx_buf, m);
+	}
+
+	stats->tx_packets += nb_tx;
+}
+
+static int32_t
+txa_service_func(void *args)
+{
+	struct txa_service_data *txa = args;
+	uint8_t dev_id;
+	uint8_t port;
+	uint16_t n;
+	uint32_t nb_tx, max_nb_tx;
+	struct rte_event ev[TXA_BATCH_SIZE];
+
+	dev_id = txa->eventdev_id;
+	max_nb_tx = txa->max_nb_tx;
+	port = txa->port_id;
+
+	if (txa->nb_queues == 0)
+		return 0;
+
+	if (!rte_spinlock_trylock(&txa->tx_lock))
+		return 0;
+
+	for (nb_tx = 0; nb_tx < max_nb_tx; nb_tx += n) {
+
+		n = rte_event_dequeue_burst(dev_id, port, ev, RTE_DIM(ev), 0);
+		if (!n)
+			break;
+		txa_service_tx(txa, ev, n);
+	}
+
+	if ((txa->loop_cnt++ & (TXA_FLUSH_THRESHOLD - 1)) == 0) {
+
+		struct txa_service_ethdev *tdi;
+		struct txa_service_queue_info *tqi;
+		struct rte_eth_dev *dev;
+		uint16_t i;
+
+		tdi = txa->txa_ethdev;
+		nb_tx = 0;
+
+		RTE_ETH_FOREACH_DEV(i) {
+			uint16_t q;
+
+			if (i == txa->dev_count)
+				break;
+
+			dev = tdi[i].dev;
+			if (tdi[i].nb_queues == 0)
+				continue;
+			for (q = 0; q < dev->data->nb_tx_queues; q++) {
+
+				tqi = txa_service_queue(txa, i, q);
+				if (unlikely(tqi == NULL || !tqi->added))
+					continue;
+
+				nb_tx += rte_eth_tx_buffer_flush(i, q,
+							tqi->tx_buf);
+			}
+		}
+
+		txa->stats.tx_packets += nb_tx;
+	}
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+}
+
+static int
+txa_service_adapter_create(uint8_t id, struct rte_eventdev *dev,
+			struct rte_event_port_conf *port_conf)
+{
+	struct txa_service_data *txa;
+	struct rte_event_port_conf *cb_conf;
+	int ret;
+
+	cb_conf = rte_malloc(NULL, sizeof(*cb_conf), 0);
+	if (cb_conf == NULL)
+		return -ENOMEM;
+
+	*cb_conf = *port_conf;
+	ret = txa_service_adapter_create_ext(id, dev, txa_service_conf_cb,
+					cb_conf);
+	if (ret) {
+		rte_free(cb_conf);
+		return ret;
+	}
+
+	txa = txa_service_id_to_data(id);
+	txa->conf_free = 1;
+	return ret;
+}
+
+static int
+txa_service_adapter_create_ext(uint8_t id, struct rte_eventdev *dev,
+			rte_event_eth_tx_adapter_conf_cb conf_cb,
+			void *conf_arg)
+{
+	struct txa_service_data *txa;
+	int socket_id;
+	char mem_name[TXA_MEM_NAME_LEN];
+	int ret;
+
+	if (conf_cb == NULL)
+		return -EINVAL;
+
+	socket_id = dev->data->socket_id;
+	snprintf(mem_name, TXA_MEM_NAME_LEN,
+		"rte_event_eth_txa_%d",
+		id);
+
+	ret = txa_service_data_init();
+	if (ret != 0)
+		return ret;
+
+	txa = rte_zmalloc_socket(mem_name,
+				sizeof(*txa),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txa == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for tx adapter");
+		return -ENOMEM;
+	}
+
+	txa->id = id;
+	txa->eventdev_id = dev->data->dev_id;
+	txa->socket_id = socket_id;
+	strncpy(txa->mem_name, mem_name, TXA_MEM_NAME_LEN);
+	txa->conf_cb = conf_cb;
+	txa->conf_arg = conf_arg;
+	txa->service_id = TXA_INVALID_SERVICE_ID;
+	rte_spinlock_init(&txa->tx_lock);
+	txa_service_data_array[id] = txa;
+
+	return 0;
+}
+
+static int
+txa_service_event_port_get(uint8_t id, uint8_t *port)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return -ENODEV;
+
+	*port = txa->port_id;
+	return 0;
+}
+
+static int
+txa_service_adapter_free(uint8_t id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->nb_queues) {
+		RTE_EDEV_LOG_ERR("%" PRIu16 " Tx queues not deleted",
+				txa->nb_queues);
+		return -EBUSY;
+	}
+
+	if (txa->conf_free)
+		rte_free(txa->conf_arg);
+	rte_free(txa);
+	return 0;
+}
+
+static int
+txa_service_queue_add(uint8_t id,
+		__rte_unused struct rte_eventdev *dev,
+		const struct rte_eth_dev *eth_dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *txa;
+	struct txa_service_ethdev *tdi;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	struct txa_retry *txa_retry;
+	int ret;
+
+	txa = txa_service_id_to_data(id);
+
+	if (tx_queue_id == -1) {
+		int nb_queues;
+		uint16_t i, j;
+		uint16_t *qdone;
+
+		nb_queues = eth_dev->data->nb_tx_queues;
+		if (txa->dev_count > eth_dev->data->port_id) {
+			tdi = &txa->txa_ethdev[eth_dev->data->port_id];
+			nb_queues -= tdi->nb_queues;
+		}
+
+		qdone = rte_zmalloc(txa->mem_name,
+				nb_queues * sizeof(*qdone), 0);
+		if (qdone == NULL)
+			return -ENOMEM;
+		ret = 0;
+		j = 0;
+		for (i = 0; i < nb_queues; i++) {
+			if (txa_service_is_queue_added(txa, eth_dev, i))
+				continue;
+			ret = txa_service_queue_add(id, dev, eth_dev, i);
+			if (ret == 0)
+				qdone[j++] = i;
+			else
+				break;
+		}
+
+		if (i != nb_queues) {
+			for (i = 0; i < j; i++)
+				txa_service_queue_del(id, eth_dev, qdone[i]);
+		}
+		rte_free(qdone);
+		return ret;
+	}
+
+	ret = txa_service_register(txa);
+	if (ret)
+		return ret;
+
+	rte_spinlock_lock(&txa->tx_lock);
+
+	if (txa_service_is_queue_added(txa, eth_dev, tx_queue_id)) {
+		rte_spinlock_unlock(&txa->tx_lock);
+		return 0;
+	}
+
+	ret = txa_service_queue_array_alloc(txa, eth_dev->data->port_id);
+	if (ret)
+		goto err_unlock;
+
+	tb = txa_service_tx_buf_alloc(txa, eth_dev);
+	if (tb == NULL) {
+		ret = -ENOMEM;
+		goto err_unlock;
+	}
+
+	tdi = &txa->txa_ethdev[eth_dev->data->port_id];
+	tqi = txa_service_queue(txa, eth_dev->data->port_id, tx_queue_id);
+
+	txa_retry = &tqi->txa_retry;
+	txa_retry->id = txa->id;
+	txa_retry->port_id = eth_dev->data->port_id;
+	txa_retry->tx_queue = tx_queue_id;
+
+	rte_eth_tx_buffer_init(tb, TXA_BATCH_SIZE);
+	rte_eth_tx_buffer_set_err_callback(tb,
+		txa_service_buffer_retry, txa_retry);
+
+	tqi->tx_buf = tb;
+	tqi->added = 1;
+	tdi->nb_queues++;
+	txa->nb_queues++;
+
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+
+err_unlock:
+	if (txa->nb_queues == 0) {
+		txa_service_queue_array_free(txa,
+					eth_dev->data->port_id);
+		txa_service_unregister(txa);
+	}
+
+	rte_spinlock_unlock(&txa->tx_lock);
+	return ret;
+}
+
+static int
+txa_service_queue_del(uint8_t id,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *txa;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	if (tx_queue_id == -1) {
+		uint16_t i;
+		int ret = 0;
+
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			ret = txa_service_queue_del(id, dev, i);
+			if (ret != 0)
+				break;
+		}
+		return ret;
+	}
+
+	txa = txa_service_id_to_data(id);
+	port_id = dev->data->port_id;
+
+	tqi = txa_service_queue(txa, port_id, tx_queue_id);
+	if (tqi == NULL || !tqi->added)
+		return 0;
+
+	tb = tqi->tx_buf;
+	tqi->added = 0;
+	tqi->tx_buf = NULL;
+	rte_free(tb);
+	txa->nb_queues--;
+	txa->txa_ethdev[port_id].nb_queues--;
+
+	txa_service_queue_array_free(txa, port_id);
+	return 0;
+}
+
+static int
+txa_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return -ESRCH;
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	*service_id = txa->service_id;
+	return 0;
+}
+
+static int
+txa_service_start(uint8_t id)
+{
+	return txa_service_ctrl(id, 1);
+}
+
+static int
+txa_service_stats_get(uint8_t id,
+		struct rte_event_eth_tx_adapter_stats *stats)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	*stats = txa->stats;
+	return 0;
+}
+
+static int
+txa_service_stats_reset(uint8_t id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	memset(&txa->stats, 0, sizeof(txa->stats));
+	return 0;
+}
+
+static int
+txa_service_stop(uint8_t id)
+{
+	return txa_service_ctrl(id, 0);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+	int ret;
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+
+	ret = txa_init();
+	if (ret != 0)
+		return ret;
+
+	if (txa_adapter_exist(id))
+		return -EEXIST;
+
+	txa_dev_id_array[id] = dev_id;
+	if (txa_dev_adapter_create(id))
+		ret = txa_dev_adapter_create(id)(id, dev);
+
+	if (ret != 0) {
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	ret = txa_service_adapter_create(id, dev, port_conf);
+	if (ret != 0) {
+		if (txa_dev_adapter_free(id))
+			txa_dev_adapter_free(id)(id, dev);
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	txa_dev_id_array[id] = dev_id;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg)
+{
+	struct rte_eventdev *dev;
+	int ret;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	ret = txa_init();
+	if (ret != 0)
+		return ret;
+
+	if (txa_adapter_exist(id))
+		return -EEXIST;
+
+	dev = &rte_eventdevs[dev_id];
+
+	txa_dev_id_array[id] = dev_id;
+	if (txa_dev_adapter_create_ext(id))
+		ret = txa_dev_adapter_create_ext(id)(id, dev);
+
+	if (ret != 0) {
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	ret = txa_service_adapter_create_ext(id, dev, conf_cb, conf_arg);
+	if (ret != 0) {
+		if (txa_dev_adapter_free(id))
+			txa_dev_adapter_free(id)(id, dev);
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	txa_dev_id_array[id] = dev_id;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+	TXA_CHECK_OR_ERR_RET(id);
+
+	return txa_service_event_port_get(id, event_port_id);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_adapter_free(id) ?
+		txa_dev_adapter_free(id)(id, txa_evdev(id)) :
+		0;
+
+	if (ret == 0)
+		ret = txa_service_adapter_free(id);
+	txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_eth_dev *eth_dev;
+	int ret;
+	uint32_t caps;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+	TXA_CHECK_OR_ERR_RET(id);
+
+	eth_dev = &rte_eth_devices[eth_dev_id];
+	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	caps = 0;
+	if (txa_dev_caps_get(id))
+		txa_dev_caps_get(id)(txa_evdev(id), eth_dev, &caps);
+
+	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+		ret =  txa_dev_queue_add(id) ?
+					txa_dev_queue_add(id)(id,
+							txa_evdev(id),
+							eth_dev,
+							queue) : 0;
+	else
+		ret = txa_service_queue_add(id, txa_evdev(id), eth_dev, queue);
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_eth_dev *eth_dev;
+	int ret;
+	uint32_t caps;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+	TXA_CHECK_OR_ERR_RET(id);
+
+	eth_dev = &rte_eth_devices[eth_dev_id];
+	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	caps = 0;
+
+	if (txa_dev_caps_get(id))
+		txa_dev_caps_get(id)(txa_evdev(id), eth_dev, &caps);
+
+	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+		ret =  txa_dev_queue_del(id) ?
+					txa_dev_queue_del(id)(id, txa_evdev(id),
+							eth_dev,
+							queue) : 0;
+	else
+		ret = txa_service_queue_del(id, eth_dev, queue);
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	TXA_CHECK_OR_ERR_RET(id);
+
+	return txa_service_id_get(id, service_id);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_start(id) ? txa_dev_start(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_start(id);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	if (stats == NULL)
+		return -EINVAL;
+
+	*stats = (struct rte_event_eth_tx_adapter_stats){0};
+
+	ret = txa_dev_stats_get(id) ?
+			txa_dev_stats_get(id)(id, txa_evdev(id), stats) : 0;
+
+	if (ret == 0 && txa_service_id_get(id, NULL) != -ESRCH) {
+		if (txa_dev_stats_get(id)) {
+			struct rte_event_eth_tx_adapter_stats service_stats;
+
+			ret = txa_service_stats_get(id, &service_stats);
+			if (ret == 0) {
+				stats->tx_retry += service_stats.tx_retry;
+				stats->tx_packets += service_stats.tx_packets;
+				stats->tx_dropped += service_stats.tx_dropped;
+			}
+		} else
+			ret = txa_service_stats_get(id, stats);
+	}
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_stats_reset(id) ?
+		txa_dev_stats_reset(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_stats_reset(id);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_stop(id) ? txa_dev_stop(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_stop(id);
+	return ret;
+}
diff --git a/config/common_base b/config/common_base
index 4bcbaf9..40a1fdf 100644
--- a/config/common_base
+++ b/config/common_base
@@ -602,7 +602,7 @@ CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
 CONFIG_RTE_EVENT_TIMER_ADAPTER_NUM_MAX=32
 CONFIG_RTE_EVENT_ETH_INTR_RING_SIZE=1024
 CONFIG_RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE=32
-
+CONFIG_RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE=32
 #
 # Compile PMD for skeleton event device
 #
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index 47f599a..424ff35 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -28,6 +28,7 @@ SRCS-y += rte_event_ring.c
 SRCS-y += rte_event_eth_rx_adapter.c
 SRCS-y += rte_event_timer_adapter.c
 SRCS-y += rte_event_crypto_adapter.c
+SRCS-y += rte_event_eth_tx_adapter.c
 
 # export include files
 SYMLINK-y-include += rte_eventdev.h
@@ -39,6 +40,7 @@ SYMLINK-y-include += rte_event_eth_rx_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter_pmd.h
 SYMLINK-y-include += rte_event_crypto_adapter.h
+SYMLINK-y-include += rte_event_eth_tx_adapter.h
 
 # versioning export map
 EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/meson.build b/lib/librte_eventdev/meson.build
index 3cbaf29..47989e7 100644
--- a/lib/librte_eventdev/meson.build
+++ b/lib/librte_eventdev/meson.build
@@ -14,7 +14,8 @@ sources = files('rte_eventdev.c',
 		'rte_event_ring.c',
 		'rte_event_eth_rx_adapter.c',
 		'rte_event_timer_adapter.c',
-		'rte_event_crypto_adapter.c')
+		'rte_event_crypto_adapter.c',
+		'rte_event_eth_tx_adapter.c')
 headers = files('rte_eventdev.h',
 		'rte_eventdev_pmd.h',
 		'rte_eventdev_pmd_pci.h',
@@ -23,5 +24,6 @@ headers = files('rte_eventdev.h',
 		'rte_event_eth_rx_adapter.h',
 		'rte_event_timer_adapter.h',
 		'rte_event_timer_adapter_pmd.h',
-		'rte_event_crypto_adapter.h')
+		'rte_event_crypto_adapter.h',
+		'rte_event_eth_tx_adapter.h')
 deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 12835e9..19c1494 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -96,6 +96,18 @@ EXPERIMENTAL {
 	rte_event_crypto_adapter_stats_reset;
 	rte_event_crypto_adapter_stop;
 	rte_event_eth_rx_adapter_cb_register;
+	rte_event_eth_tx_adapter_caps_get;
+	rte_event_eth_tx_adapter_create;
+	rte_event_eth_tx_adapter_create_ext;
+	rte_event_eth_tx_adapter_event_port_get;
+	rte_event_eth_tx_adapter_free;
+	rte_event_eth_tx_adapter_queue_add;
+	rte_event_eth_tx_adapter_queue_del;
+	rte_event_eth_tx_adapter_service_id_get;
+	rte_event_eth_tx_adapter_start;
+	rte_event_eth_tx_adapter_stats_get;
+	rte_event_eth_tx_adapter_stats_reset;
+	rte_event_eth_tx_adapter_stop;
 	rte_event_timer_adapter_caps_get;
 	rte_event_timer_adapter_create;
 	rte_event_timer_adapter_create_ext;
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 3/5] eventdev: add eth Tx adapter implementation Nikhil Rao
@ 2018-08-31  5:41   ` Nikhil Rao
  2018-09-17 14:00     ` Jerin Jacob
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 5/5] doc: add event eth Tx adapter guide Nikhil Rao
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
  4 siblings, 1 reply; 27+ messages in thread
From: Nikhil Rao @ 2018-08-31  5:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

This patch adds tests for the eth Tx adapter APIs. It also
tests the data path for the rte_service function based
implementation of the APIs.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 test/test/test_event_eth_tx_adapter.c | 694 ++++++++++++++++++++++++++++++++++
 MAINTAINERS                           |   1 +
 test/test/Makefile                    |   1 +
 test/test/meson.build                 |   2 +
 4 files changed, 698 insertions(+)
 create mode 100644 test/test/test_event_eth_tx_adapter.c

diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
new file mode 100644
index 0000000..e97f59f
--- /dev/null
+++ b/test/test/test_event_eth_tx_adapter.c
@@ -0,0 +1,694 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+#include <string.h>
+#include <rte_common.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_eth_ring.h>
+#include <rte_service.h>
+#include <rte_event_eth_tx_adapter.h>
+
+#include "test.h"
+
+#define MAX_NUM_QUEUE		RTE_PMD_RING_MAX_RX_RINGS
+#define TEST_INST_ID		0
+#define TEST_DEV_ID		0
+#define SOCKET0			0
+#define RING_SIZE		256
+#define ETH_NAME_LEN		32
+#define NUM_ETH_PAIR		1
+#define NUM_ETH_DEV		(2 * NUM_ETH_PAIR)
+#define NB_MBUF			512
+#define PAIR_PORT_INDEX(p)	((p) + NUM_ETH_PAIR)
+#define PORT(p)			default_params.port[(p)]
+#define TEST_ETHDEV_ID		PORT(0)
+#define TEST_ETHDEV_PAIR_ID	PORT(PAIR_PORT_INDEX(0))
+
+#define EDEV_RETRY		0xffff
+
+struct event_eth_tx_adapter_test_params {
+	struct rte_mempool *mp;
+	uint16_t rx_rings, tx_rings;
+	struct rte_ring *r[NUM_ETH_DEV][MAX_NUM_QUEUE];
+	int port[NUM_ETH_DEV];
+};
+
+static int event_dev_delete;
+static struct event_eth_tx_adapter_test_params default_params;
+static uint64_t eid = ~0ULL;
+static uint32_t tid;
+
+static inline int
+port_init_common(uint8_t port, const struct rte_eth_conf *port_conf,
+		struct rte_mempool *mp)
+{
+	const uint16_t rx_ring_size = RING_SIZE, tx_ring_size = RING_SIZE;
+	int retval;
+	uint16_t q;
+
+	if (!rte_eth_dev_is_valid_port(port))
+		return -1;
+
+	default_params.rx_rings = MAX_NUM_QUEUE;
+	default_params.tx_rings = MAX_NUM_QUEUE;
+
+	/* Configure the Ethernet device. */
+	retval = rte_eth_dev_configure(port, default_params.rx_rings,
+				default_params.tx_rings, port_conf);
+	if (retval != 0)
+		return retval;
+
+	for (q = 0; q < default_params.rx_rings; q++) {
+		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+				rte_eth_dev_socket_id(port), NULL, mp);
+		if (retval < 0)
+			return retval;
+	}
+
+	for (q = 0; q < default_params.tx_rings; q++) {
+		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+				rte_eth_dev_socket_id(port), NULL);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Start the Ethernet port. */
+	retval = rte_eth_dev_start(port);
+	if (retval < 0)
+		return retval;
+
+	/* Display the port MAC address. */
+	struct ether_addr addr;
+	rte_eth_macaddr_get(port, &addr);
+	printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+			   " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+			(unsigned int)port,
+			addr.addr_bytes[0], addr.addr_bytes[1],
+			addr.addr_bytes[2], addr.addr_bytes[3],
+			addr.addr_bytes[4], addr.addr_bytes[5]);
+
+	/* Enable RX in promiscuous mode for the Ethernet device. */
+	rte_eth_promiscuous_enable(port);
+
+	return 0;
+}
+
+static inline int
+port_init(uint8_t port, struct rte_mempool *mp)
+{
+	struct rte_eth_conf conf = { 0 };
+	return port_init_common(port, &conf, mp);
+}
+
+#define RING_NAME_LEN	20
+#define DEV_NAME_LEN	20
+
+static int
+init_ports(void)
+{
+	char ring_name[ETH_NAME_LEN];
+	unsigned int i, j;
+	struct rte_ring * const *c1;
+	struct rte_ring * const *c2;
+	int err;
+
+	if (!default_params.mp)
+		default_params.mp = rte_pktmbuf_pool_create("mbuf_pool",
+			NB_MBUF, 32,
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+
+	if (!default_params.mp)
+		return -ENOMEM;
+
+	for (i = 0; i < NUM_ETH_DEV; i++) {
+		for (j = 0; j < MAX_NUM_QUEUE; j++) {
+			snprintf(ring_name, sizeof(ring_name), "R%u%u", i, j);
+			default_params.r[i][j] = rte_ring_create(ring_name,
+						RING_SIZE,
+						SOCKET0,
+						RING_F_SP_ENQ | RING_F_SC_DEQ);
+			TEST_ASSERT((default_params.r[i][j] != NULL),
+				"Failed to allocate ring");
+		}
+	}
+
+	/*
+	 * To create two pseudo-Ethernet ports where the traffic is
+	 * switched between them, that is, traffic sent to port 1 is
+	 * read back from port 2 and vice-versa
+	 */
+	for (i = 0; i < NUM_ETH_PAIR; i++) {
+		char dev_name[DEV_NAME_LEN];
+		int p;
+
+		c1 = default_params.r[i];
+		c2 = default_params.r[PAIR_PORT_INDEX(i)];
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u", i, i + NUM_ETH_PAIR);
+		p = rte_eth_from_rings(dev_name, c1, MAX_NUM_QUEUE,
+				 c2, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[i] = p;
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u",  i + NUM_ETH_PAIR, i);
+		p = rte_eth_from_rings(dev_name, c2, MAX_NUM_QUEUE,
+				c1, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[PAIR_PORT_INDEX(i)] = p;
+	}
+
+	return 0;
+}
+
+static void
+deinit_ports(void)
+{
+	uint16_t i, j;
+	char name[ETH_NAME_LEN];
+
+	for (i = 0; i < RTE_DIM(default_params.port); i++) {
+		rte_eth_dev_stop(default_params.port[i]);
+		rte_eth_dev_get_name_by_port(default_params.port[i], name);
+		rte_vdev_uninit(name);
+		for (j = 0; j < RTE_DIM(default_params.r[i]); j++)
+			rte_ring_free(default_params.r[i][j]);
+	}
+}
+
+static int
+testsuite_setup(void)
+{
+	const char *vdev_name = "event_sw0";
+
+	int err = init_ports();
+	TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
+
+	if (rte_event_dev_count() == 0) {
+		printf("Failed to find a valid event device,"
+			" testing with event_sw0 device\n");
+		err = rte_vdev_init(vdev_name, NULL);
+		TEST_ASSERT(err == 0, "vdev %s creation failed  %d\n",
+			vdev_name, err);
+		event_dev_delete = 1;
+	}
+	return err;
+}
+
+#define DEVICE_ID_SIZE 64
+
+static void
+testsuite_teardown(void)
+{
+	deinit_ports();
+	rte_mempool_free(default_params.mp);
+	default_params.mp = NULL;
+	if (event_dev_delete)
+		rte_vdev_uninit("event_sw0");
+}
+
+static int
+tx_adapter_create(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf = {0};
+	uint8_t priority;
+	uint8_t queue_id;
+
+	struct rte_event_dev_config config = {
+			.nb_event_queues = 1,
+			.nb_event_ports = 1,
+	};
+
+	struct rte_event_queue_conf wkr_q_conf = {
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = 1024,
+			.nb_atomic_order_sequences = 1024,
+	};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	config.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	config.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	config.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	config.nb_events_limit =
+			dev_info.max_num_events;
+
+	err = rte_event_dev_configure(TEST_DEV_ID, &config);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	queue_id = 0;
+	err = rte_event_queue_setup(TEST_DEV_ID, 0, &wkr_q_conf);
+	TEST_ASSERT(err == 0, "Event queue setup failed %d\n", err);
+
+	err = rte_event_port_setup(TEST_DEV_ID, 0, NULL);
+	TEST_ASSERT(err == 0, "Event port setup failed %d\n", err);
+
+	priority = RTE_EVENT_DEV_PRIORITY_LOWEST;
+	err = rte_event_port_link(TEST_DEV_ID, 0, &queue_id, &priority, 1);
+	TEST_ASSERT(err == 1, "Error linking port %s\n",
+		rte_strerror(rte_errno));
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	return err;
+}
+
+static void
+tx_adapter_free(void)
+{
+	rte_event_eth_tx_adapter_free(TEST_INST_ID);
+}
+
+static int
+tx_adapter_create_free(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf = {0};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID,
+					TEST_DEV_ID, &tx_p_conf);
+	TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d", -EEXIST, err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	err = rte_event_eth_tx_adapter_free(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_queue_add_del(void)
+{
+	int err;
+	uint32_t cap;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+					 &cap);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						rte_eth_dev_count_total(),
+						-1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_start_stop(void)
+{
+	int err;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_single(uint16_t port, uint16_t tx_queue_id,
+		struct rte_mbuf *m, uint8_t qid,
+		uint8_t sched_type)
+{
+	struct rte_event event;
+	struct rte_mbuf *r;
+	int ret;
+	unsigned int l;
+
+	event.queue_id = qid;
+	event.op = RTE_EVENT_OP_NEW;
+	event.event_type = RTE_EVENT_TYPE_CPU;
+	event.sched_type = sched_type;
+	event.mbuf = m;
+
+	m->port = port;
+	rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+	l = 0;
+	while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1) {
+		l++;
+		if (l > EDEV_RETRY)
+			break;
+	}
+
+	TEST_ASSERT(l < EDEV_RETRY, "Unable to enqueue to eventdev");
+	l = 0;
+	while (l++ < EDEV_RETRY) {
+
+		if (eid != ~0ULL) {
+			ret = rte_service_run_iter_on_app_lcore(eid, 0);
+			TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+		}
+
+		ret = rte_service_run_iter_on_app_lcore(tid, 0);
+		TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+
+		if (rte_eth_rx_burst(TEST_ETHDEV_PAIR_ID, tx_queue_id,
+				&r, 1)) {
+			TEST_ASSERT_EQUAL(r, m, "mbuf comparison failed"
+					" expected %p received %p", m, r);
+			return 0;
+		}
+	}
+
+	TEST_ASSERT(0, "Failed to receive packet");
+	return -1;
+}
+
+static int
+tx_adapter_service(void)
+{
+	struct rte_event_eth_tx_adapter_stats stats;
+	uint32_t i;
+	int err;
+	uint8_t ev_port, ev_qid;
+	struct rte_mbuf  bufs[RING_SIZE];
+	struct rte_mbuf *pbufs[RING_SIZE];
+	struct rte_event_dev_info dev_info;
+	struct rte_event_dev_config dev_conf = {0};
+	struct rte_event_queue_conf qconf;
+	uint32_t qcnt, pcnt;
+	uint16_t q;
+	int internal_port;
+	uint32_t cap;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+						&cap);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n", err);
+
+	internal_port = !!(cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+	if (internal_port)
+		return TEST_SUCCESS;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_event_port_get(TEST_INST_ID,
+						&ev_port);
+	TEST_ASSERT_SUCCESS(err, "Failed to get event port %d", err);
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+					&pcnt);
+	TEST_ASSERT_SUCCESS(err, "Port count get failed");
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID,
+				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &qcnt);
+	TEST_ASSERT_SUCCESS(err, "Queue count get failed");
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT_SUCCESS(err, "Dev info failed");
+
+	dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	dev_conf.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	dev_conf.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	dev_conf.nb_events_limit =
+			dev_info.max_num_events;
+	dev_conf.nb_event_queues = qcnt + 1;
+	dev_conf.nb_event_ports = pcnt;
+	err = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	ev_qid = qcnt;
+	qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
+	qconf.nb_atomic_order_sequences = 32;
+	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
+	qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
+	TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);
+
+	/*
+	 * Setup ports again so that the newly added queue is visible
+	 * to them
+	 */
+	for (i = 0; i < pcnt; i++) {
+
+		int n_links;
+		uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+		uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+		if (i == ev_port)
+			continue;
+
+		n_links = rte_event_port_links_get(TEST_DEV_ID, i, queues,
+						priorities);
+		TEST_ASSERT(n_links > 0, "Failed to get port links %d\n",
+			n_links);
+		err = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT(err == 0, "Failed to setup port err %d\n", err);
+		err = rte_event_port_link(TEST_DEV_ID, i, queues, priorities,
+					n_links);
+		TEST_ASSERT(n_links == err, "Failed to link all queues"
+			" err %s\n", rte_strerror(rte_errno));
+	}
+
+	err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
+	TEST_ASSERT(err == 1, "Failed to link queue port %u",
+		    ev_port);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	if (!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
+		err = rte_event_dev_service_id_get(TEST_DEV_ID,
+						(uint32_t *)&eid);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_service_runstate_set(eid, 1);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_service_set_runstate_mapped_check(eid, 0);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	}
+
+	err = rte_event_eth_tx_adapter_service_id_get(TEST_INST_ID, &tid);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_runstate_set(tid, 1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_set_runstate_mapped_check(tid, 0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	for (q = 0; q < MAX_NUM_QUEUE; q++) {
+		for (i = 0; i < RING_SIZE; i++) {
+			pbufs[i] = &bufs[i];
+			err = tx_adapter_single(TEST_ETHDEV_ID, q, pbufs[i],
+						ev_qid,
+						RTE_SCHED_TYPE_ORDERED);
+			TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+		}
+		for (i = 0; i < RING_SIZE; i++) {
+			TEST_ASSERT_EQUAL(pbufs[i], &bufs[i],
+				"Error: received data does not match"
+				" that transmitted");
+		}
+	}
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, MAX_NUM_QUEUE * RING_SIZE,
+			"stats.tx_packets expected %u got %"PRIu64,
+			MAX_NUM_QUEUE * RING_SIZE,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_reset(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, 0,
+			"stats.tx_packets expected %u got %"PRIu64,
+			0,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_get(1, &stats);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	rte_event_dev_stop(TEST_DEV_ID);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_dynamic_device(void)
+{
+	uint16_t port_id = rte_eth_dev_count_avail();
+	const char *null_dev[2] = { "eth_null0", "eth_null1" };
+	struct rte_eth_conf dev_conf = {0};
+	int ret;
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(null_dev); i++) {
+		ret = rte_vdev_init(null_dev[i], NULL);
+		TEST_ASSERT_SUCCESS(ret, "%s Port creation failed %d",
+				null_dev[i], ret);
+
+		if (i == 0) {
+			ret = tx_adapter_create();
+			TEST_ASSERT_SUCCESS(ret, "Adapter create failed %d",
+					ret);
+		}
+
+		ret = rte_eth_dev_configure(port_id + i, MAX_NUM_QUEUE,
+					MAX_NUM_QUEUE, &dev_conf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to configure device %d", ret);
+
+		ret = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+							port_id + i, 0);
+		TEST_ASSERT_SUCCESS(ret, "Failed to add queues %d", ret);
+
+	}
+
+	for (i = 0; i < RTE_DIM(null_dev); i++) {
+		ret = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+							port_id + i, -1);
+		TEST_ASSERT_SUCCESS(ret, "Failed to delete queues %d", ret);
+	}
+
+	tx_adapter_free();
+
+	for (i = 0; i < RTE_DIM(null_dev); i++)
+		rte_vdev_uninit(null_dev[i]);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite event_eth_tx_tests = {
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.suite_name = "tx event eth adapter test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, tx_adapter_create_free),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_queue_add_del),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_start_stop),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_service),
+		TEST_CASE_ST(NULL, NULL, tx_adapter_dynamic_device),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_event_eth_tx_adapter_common(void)
+{
+	return unit_test_suite_runner(&event_eth_tx_tests);
+}
+
+REGISTER_TEST_COMMAND(event_eth_tx_adapter_autotest,
+		test_event_eth_tx_adapter_common);
diff --git a/MAINTAINERS b/MAINTAINERS
index ea9e0e7..6105a56 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -395,6 +395,7 @@ Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
 M: Nikhil Rao <nikhil.rao@intel.com>
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_tx_adapter*
+F: test/test/test_event_eth_tx_adapter.c
 
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/test/test/Makefile b/test/test/Makefile
index e6967ba..dcea441 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -191,6 +191,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
 SRCS-y += test_eventdev.c
 SRCS-y += test_event_ring.c
 SRCS-y += test_event_eth_rx_adapter.c
+SRCS-y += test_event_eth_tx_adapter.c
 SRCS-y += test_event_timer_adapter.c
 SRCS-y += test_event_crypto_adapter.c
 endif
diff --git a/test/test/meson.build b/test/test/meson.build
index b1dd6ec..3d2887b 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -34,6 +34,7 @@ test_sources = files('commands.c',
 	'test_efd_perf.c',
 	'test_errno.c',
 	'test_event_ring.c',
+	'test_event_eth_tx_adapter.c',
 	'test_eventdev.c',
 	'test_func_reentrancy.c',
 	'test_flow_classify.c',
@@ -152,6 +153,7 @@ test_names = [
 	'efd_perf_autotest',
 	'errno_autotest',
 	'event_ring_autotest',
+	'event_eth_tx_adapter_autotest',
 	'eventdev_common_autotest',
 	'eventdev_octeontx_autotest',
 	'eventdev_sw_autotest',
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v3 5/5] doc: add event eth Tx adapter guide
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
                     ` (2 preceding siblings ...)
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
@ 2018-08-31  5:41   ` Nikhil Rao
  2018-09-17 13:56     ` Jerin Jacob
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
  4 siblings, 1 reply; 27+ messages in thread
From: Nikhil Rao @ 2018-08-31  5:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz; +Cc: dev, Nikhil Rao

Add programmer's guide doc to explain the use of the
Event Ethernet Tx Adapter library.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 MAINTAINERS                                        |   1 +
 .../prog_guide/event_ethernet_tx_adapter.rst       | 165 +++++++++++++++++++++
 doc/guides/prog_guide/index.rst                    |   1 +
 3 files changed, 167 insertions(+)
 create mode 100644 doc/guides/prog_guide/event_ethernet_tx_adapter.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 6105a56..c98faf2b7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -396,6 +396,7 @@ M: Nikhil Rao <nikhil.rao@intel.com>
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_tx_adapter*
 F: test/test/test_event_eth_tx_adapter.c
+F: doc/guides/prog_guide/event_ethernet_tx_adapter.rst
 
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/doc/guides/prog_guide/event_ethernet_tx_adapter.rst b/doc/guides/prog_guide/event_ethernet_tx_adapter.rst
new file mode 100644
index 0000000..af657d0
--- /dev/null
+++ b/doc/guides/prog_guide/event_ethernet_tx_adapter.rst
@@ -0,0 +1,165 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2017 Intel Corporation.
+
+Event Ethernet Tx Adapter Library
+=================================
+
+The DPDK Eventdev API allows the application to use an event driven programming
+model for packet processing in which the event device distributes events
+referencing packets to the application cores in a dynamic load balanced fashion
+while handling atomicity and packet ordering. Event adapters provide the interface
+between the ethernet, crypto and timer devices and the event device. Event adapter
+APIs enable common application code by abstracting PMD specific capabilities.
+The Event ethernet Tx adapter provides configuration and data path APIs for the
+transmit stage of the application, allowing the same application code to use
+eventdev PMD support or, in its absence, a common implementation.
+
+In the common implementation, the application enqueues mbufs to the adapter
+which runs as a rte_service function. The service function dequeues events
+from its event port and transmits the mbufs referenced by these events.
+
+
+API Walk-through
+----------------
+
+This section introduces the reader to the adapter API. The
+application has to first instantiate an adapter, which is associated with
+a single eventdev; next, the adapter instance is configured with Tx queues;
+finally, the adapter is started and the application can start enqueuing mbufs
+to it.
+
+Creating an Adapter Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An adapter instance is created using ``rte_event_eth_tx_adapter_create()``. This
+function is passed the event device to be associated with the adapter and port
+configuration for the adapter to setup an event port if the adapter needs to use
+a service function.
+
+If the application desires to have finer control of eventdev port configuration,
+it can use the ``rte_event_eth_tx_adapter_create_ext()`` function. The
+``rte_event_eth_tx_adapter_create_ext()`` function is passed a callback function.
+The callback function is invoked if the adapter needs to use a service function
+and needs to create an event port for it. The callback is expected to fill the
+``struct rte_event_eth_tx_adapter_confi`` structure passed to it.
+
+.. code-block:: c
+
+        struct rte_event_dev_info dev_info;
+        struct rte_event_port_conf tx_p_conf = {0};
+
+        err = rte_event_dev_info_get(id, &dev_info);
+
+        tx_p_conf.new_event_threshold = dev_info.max_num_events;
+        tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+        tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+        err = rte_event_eth_tx_adapter_create(id, dev_id, &tx_p_conf);
+
+Adding Tx Queues to the Adapter Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ethdev Tx queues are added to the instance using the
+``rte_event_eth_tx_adapter_queue_add()`` function. A queue value
+of -1 is used to indicate all queues within a device.
+
+.. code-block:: c
+
+        int err = rte_event_eth_tx_adapter_queue_add(id,
+						     eth_dev_id,
+						     q);
+
+Querying Adapter Capabilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``rte_event_eth_tx_adapter_caps_get()`` function allows
+the application to query the adapter capabilities for an eventdev and ethdev
+combination. Currently, the only capability flag defined is
+``RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT``. The application can
+query this flag to determine if a service function is associated with the
+adapter and, if so, retrieve its service identifier using the
+``rte_event_eth_tx_adapter_service_id_get()`` API.
+
+
+.. code-block:: c
+
+        int err = rte_event_eth_tx_adapter_caps_get(dev_id, eth_dev_id, &cap);
+
+        if (cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+                err = rte_event_eth_tx_adapter_service_id_get(id, &service_id);
+
+Linking a Queue to the Adapter's Event Port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the adapter uses a service function as described in the previous section, the
+application is required to link a queue to the adapter's event port. The adapter's
+event port can be obtained using the ``rte_event_eth_tx_adapter_event_port_get()``
+function. The queue can be configured with the ``RTE_EVENT_QUEUE_CFG_SINGLE_LINK``
+attribute since it is linked to a single event port.
+
+Configuring the Service Function
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the adapter uses a service function, the application can assign
+a service core to the service function as shown below.
+
+.. code-block:: c
+
+        if (rte_event_eth_tx_adapter_service_id_get(id, &service_id) == 0)
+                rte_service_map_lcore_set(service_id, TX_CORE_ID);
+
+Starting the Adapter Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application calls ``rte_event_eth_tx_adapter_start()`` to start the adapter.
+This function calls the start callback of the eventdev PMD if supported,
+and ``rte_service_runstate_set()`` to enable the service function if one exists.
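+
+For example, assuming the adapter instance ``id`` created earlier:
+
+.. code-block:: c
+
+        err = rte_event_eth_tx_adapter_start(id);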
+
+Enqueueing Packets to the Adapter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application needs to notify the adapter about the transmit port and queue used
+to send the packet. The transmit port is set in the ``rte_mbuf::port`` field
+and the transmit queue is set using the ``rte_event_eth_tx_adapter_txq_set()``
+function.
+
+If the eventdev PMD supports the ``RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT``
+capability for a given ethernet device, the application should use the
+``rte_event_eth_tx_adapter_enqueue()`` function to enqueue packets to the adapter.
+
+If the adapter uses a service function for the ethernet device then the application
+should use the ``rte_event_enqueue_burst()`` function.
+
+.. code-block:: c
+
+	struct rte_event event;
+
+	if (cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) {
+
+		event.mbuf = m;
+
+		m->port = tx_port;
+		rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+		rte_event_eth_tx_adapter_enqueue(dev_id, ev_port, &event, 1);
+	} else {
+
+		event.queue_id = qid; /* event queue linked to adapter port */
+		event.op = RTE_EVENT_OP_NEW;
+		event.event_type = RTE_EVENT_TYPE_CPU;
+		event.sched_type = RTE_SCHED_TYPE_ATOMIC;
+		event.mbuf = m;
+
+		m->port = tx_port;
+		rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+		rte_event_enqueue_burst(dev_id, ev_port, &event, 1);
+	}
+
+Getting Adapter Statistics
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``rte_event_eth_tx_adapter_stats_get()`` function reports counters defined
+in ``struct rte_event_eth_tx_adapter_stats``. The counter values are the sum of
+the counts from the eventdev PMD callback, if the callback is supported, and
+the counts maintained by the service function, if one exists.
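+
+A minimal sketch (assuming the adapter instance ``id``):
+
+.. code-block:: c
+
+        struct rte_event_eth_tx_adapter_stats stats;
+
+        err = rte_event_eth_tx_adapter_stats_get(id, &stats);
+        if (err == 0)
+                printf("tx packets %" PRIu64 "\n", stats.tx_packets);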
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 3b920e5..c81d9c5 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -44,6 +44,7 @@ Programmer's Guide
     thread_safety_dpdk_functions
     eventdev
     event_ethernet_rx_adapter
+    event_ethernet_tx_adapter
     event_timer_adapter
     event_crypto_adapter
     qos_framework
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v3 5/5] doc: add event eth Tx adapter guide
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 5/5] doc: add event eth Tx adapter guide Nikhil Rao
@ 2018-09-17 13:56     ` Jerin Jacob
  0 siblings, 0 replies; 27+ messages in thread
From: Jerin Jacob @ 2018-09-17 13:56 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, dev, marko.kovacevic, john.mcnamara

-----Original Message-----
> Date: Fri, 31 Aug 2018 11:11:09 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> Subject: [PATCH v3 5/5] doc: add event eth Tx adapter guide
> X-Mailer: git-send-email 1.8.3.1
> 
> 
> Add programmer's guide doc to explain the use of the
> Event Ethernet Tx Adapter library.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---

+ john.mcnamara@intel.com, marko.kovacevic@intel.com

> +++ b/doc/guides/prog_guide/event_ethernet_tx_adapter.rst
> @@ -0,0 +1,165 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2017 Intel Corporation.
> +
> +
> +Creating an Adapter Instance
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +An adapter instance is created using ``rte_event_eth_tx_adapter_create()``. This
> +function is passed the event device to be associated with the adapter and port
> +configuration for the adapter to setup an event port if the adapter needs to use
> +a service function.
> +
> +If the application desires to have finer control of eventdev port configuration,
> +it can use the ``rte_event_eth_tx_adapter_create_ext()`` function. The
> +``rte_event_eth_tx_adapter_create_ext()`` function is passed a callback function.
> +The callback function is invoked if the adapter needs to use a service function
> +and needs to create an event port for it. The callback is expected to fill the
> +``struct rte_event_eth_tx_adapter_confi`` structure passed to it.

s/rte_event_eth_tx_adapter_confi/rte_event_eth_tx_adapter_conf/

> +
> +.. code-block:: c
> +
> +        struct rte_event_dev_info dev_info;
> +        struct rte_event_port_conf tx_p_conf = {0};
> +
> +        err = rte_event_dev_info_get(id, &dev_info);
> +
> +        tx_p_conf.new_event_threshold = dev_info.max_num_events;
> +        tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
> +        tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
> +
> +        err = rte_event_eth_tx_adapter_create(id, dev_id, &tx_p_conf);
> +
> +
> +Querying Adapter Capabilities
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The ``rte_event_eth_tx_adapter_caps_get()`` function allows
> +the application to query the adapter capabilities for an eventdev and ethdev
> +combination. Currently, the only capability flag defined is
> +``RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT``, the application can
> +query this flag to determine if a service function is associated with the
> +adapter and retrieve its service identifier using the
> +``rte_event_eth_tx_adapter_service_id_get()`` API.
> +
> +
> +.. code-block:: c
> +
> +        int err = rte_event_eth_tx_adapter_caps_get(dev_id, eth_dev_id, &cap);
> +
> +        if (cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)


Shouldn't it be, if (!(cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
ie. rte_event_eth_tx_adapter_service_id_get valid only when cap is
!RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT

> +                err = rte_event_eth_tx_adapter_service_id_get(id, &service_id);
> +
> +
> +Enqueueing Packets to the Adapter

s/Enqueueing/Enqueuing ??

> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +

With above change it looks good to me.

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
@ 2018-09-17 14:00     ` Jerin Jacob
  0 siblings, 0 replies; 27+ messages in thread
From: Jerin Jacob @ 2018-09-17 14:00 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, dev

-----Original Message-----
> Date: Fri, 31 Aug 2018 11:11:08 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com
> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> Subject: [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter
> X-Mailer: git-send-email 1.8.3.1
> 
> This patch adds tests for the eth Tx adapter APIs. It also
> tests the data path for the rte_service function based
> implementation of the APIs.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---
>  test/test/test_event_eth_tx_adapter.c | 694 ++++++++++++++++++++++++++++++++++
>  MAINTAINERS                           |   1 +
>  test/test/Makefile                    |   1 +
>  test/test/meson.build                 |   2 +
>  4 files changed, 698 insertions(+)
>  create mode 100644 test/test/test_event_eth_tx_adapter.c
> 
> diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
> new file mode 100644
> index 0000000..e97f59f
> --- /dev/null
> +++ b/test/test/test_event_eth_tx_adapter.c
> @@ -0,0 +1,694 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +#include <string.h>

Space here

> +#include <rte_common.h>
> +#include <rte_mempool.h>
> +#include <rte_mbuf.h>
> +#include <rte_ethdev.h>
> +#include <rte_eventdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_eth_ring.h>
> +#include <rte_service.h>
> +#include <rte_event_eth_tx_adapter.h>

Sort it in alphabetical order.

> +
> +#include "test.h"
> +
> +
> +static inline int
> +port_init(uint8_t port, struct rte_mempool *mp)
> +{
> +       struct rte_eth_conf conf = { 0 };

Some old compiler had issue with above declaration,
Can you use memset instead here another instance of this file.

> +       return port_init_common(port, &conf, mp);
> +}
> +
> +#define RING_NAME_LEN  20
> +#define DEV_NAME_LEN   20
> +

With above changes:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs
  2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
                     ` (3 preceding siblings ...)
  2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 5/5] doc: add event eth Tx adapter guide Nikhil Rao
@ 2018-09-20 17:41   ` Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
                       ` (5 more replies)
  4 siblings, 6 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-09-20 17:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz, marko.kovacevic, john.mcnamara; +Cc: dev, Nikhil Rao

The ethernet Tx adapter abstracts the transmit stage of an
event-driven packet processing application. The transmit
stage may be implemented with eventdev PMD support or use an
rte_service function implemented in the adapter. These APIs
provide a common configuration and control interface and
a transmit API for the eventdev PMD implementation.

The transmit port is specified using mbuf::port. The transmit
queue is specified using the rte_event_eth_tx_adapter_txq_set()
function.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_eventdev/rte_event_eth_tx_adapter.h | 462 +++++++++++++++++++++++++
 lib/librte_mbuf/rte_mbuf.h                     |   5 +-
 MAINTAINERS                                    |   5 +
 doc/api/doxy-api-index.md                      |   1 +
 4 files changed, 472 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.h

diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
new file mode 100644
index 0000000..3e0d5c6
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -0,0 +1,462 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+
+#ifndef _RTE_EVENT_ETH_TX_ADAPTER_
+#define _RTE_EVENT_ETH_TX_ADAPTER_
+
+/**
+ * @file
+ *
+ * RTE Event Ethernet Tx Adapter
+ *
+ * The event ethernet Tx adapter provides configuration and data path APIs
+ * for the ethernet transmit stage of an event driven packet processing
+ * application. These APIs abstract the implementation of the transmit stage
+ * and allow the application to use eventdev PMD support or a common
+ * implementation.
+ *
+ * In the common implementation, the application enqueues mbufs to the adapter
+ * which runs as a rte_service function. The service function dequeues events
+ * from its event port and transmits the mbufs referenced by these events.
+ *
+ * The ethernet Tx event adapter APIs are:
+ *
+ *  - rte_event_eth_tx_adapter_create()
+ *  - rte_event_eth_tx_adapter_create_ext()
+ *  - rte_event_eth_tx_adapter_free()
+ *  - rte_event_eth_tx_adapter_start()
+ *  - rte_event_eth_tx_adapter_stop()
+ *  - rte_event_eth_tx_adapter_queue_add()
+ *  - rte_event_eth_tx_adapter_queue_del()
+ *  - rte_event_eth_tx_adapter_stats_get()
+ *  - rte_event_eth_tx_adapter_stats_reset()
+ *  - rte_event_eth_tx_adapter_enqueue()
+ *  - rte_event_eth_tx_adapter_event_port_get()
+ *  - rte_event_eth_tx_adapter_service_id_get()
+ *
+ * The application creates the adapter using
+ * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
+ *
+ * The adapter will use the common implementation when the eventdev PMD
+ * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
+ * The common implementation uses an event port that is created using the port
+ * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
+ * application can get the port identifier using
+ * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
+ * this port.
+ *
+ * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * flags set, Tx adapter events should be enqueued using the
+ * rte_event_eth_tx_adapter_enqueue() function, else the application should
+ * use rte_event_enqueue_burst().
+ *
+ * Transmit queues can be added and deleted from the adapter using
+ * rte_event_eth_tx_adapter_queue_add()/del() APIs respectively.
+ *
+ * The application can start and stop the adapter using the
+ * rte_event_eth_tx_adapter_start/stop() calls.
+ *
+ * The common adapter implementation uses an EAL service function as described
+ * before and its execution is controlled using the rte_service APIs. The
+ * rte_event_eth_tx_adapter_service_id_get()
+ * function can be used to retrieve the adapter's service function ID.
+ *
+ * The ethernet port and transmit queue index to transmit the mbuf on are
+ * specified using the mbuf port field and the upper 16 bits of
+ * struct rte_mbuf::hash::sched::hi. The application should use the
+ * rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
+ * functions to access the transmit queue index, since the transmit queue
+ * index is expected to eventually be defined within struct rte_mbuf; using
+ * these functions minimizes the application impact of a change in how the
+ * transmit queue index is specified.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+
+#include "rte_eventdev.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Adapter configuration structure
+ *
+ * @see rte_event_eth_tx_adapter_create_ext
+ * @see rte_event_eth_tx_adapter_conf_cb
+ */
+struct rte_event_eth_tx_adapter_conf {
+	uint8_t event_port_id;
+	/**< Event port identifier, the adapter service function dequeues mbuf
+	 * events from this port.
+	 * @see RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+	 */
+	uint32_t max_nb_tx;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
+	 * cause the adapter to process more than max_nb_tx mbufs.
+	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Function type used for adapter configuration callback. The callback is
+ * used to fill in members of struct rte_event_eth_tx_adapter_conf; this
+ * callback is invoked when creating an RTE service function based
+ * adapter implementation.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param dev_id
+ *  Event device identifier.
+ * @param [out] conf
+ *  Structure that needs to be populated by this callback.
+ * @param arg
+ *  Argument to the callback. This is the same as the conf_arg passed to the
+ *  rte_event_eth_tx_adapter_create_ext().
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+typedef int (*rte_event_eth_tx_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
+				struct rte_event_eth_tx_adapter_conf *conf,
+				void *arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * A structure used to retrieve statistics for an ethernet Tx adapter instance.
+ */
+struct rte_event_eth_tx_adapter_stats {
+	uint64_t tx_retry;
+	/**< Number of transmit retries */
+	uint64_t tx_packets;
+	/**< Number of packets transmitted */
+	uint64_t tx_dropped;
+	/**< Number of packets dropped */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param port_config
+ *  Event port configuration, the adapter uses this configuration to
+ *  create an event port if needed.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_config);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ethernet Tx adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Tx adapter.
+ * @param dev_id
+ *  The event device identifier.
+ * @param conf_cb
+ *  Callback function that initializes members of the
+ *  struct rte_event_eth_tx_adapter_conf struct passed into
+ *  it.
+ * @param conf_arg
+ *  Argument that is passed to the conf_cb function.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free an ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure. If the adapter still has Tx queues
+ *      added to it, the function returns -EBUSY.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Start ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Stop ethernet Tx adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues added successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device, that have been added to this
+ * adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param eth_dev_id
+ *  Ethernet Port Identifier.
+ * @param queue
+ *  Tx queue index.
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set the Tx queue in the mbuf. This queue is used by the adapter
+ * to transmit the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @param queue
+ *  Tx queue index.
+ */
+static __rte_always_inline void __rte_experimental
+rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
+{
+	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
+	p[1] = queue;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve Tx queue from the mbuf.
+ *
+ * @param pkt
+ *  Pointer to the mbuf.
+ * @return
+ *  Tx queue identifier.
+ *
+ * @see rte_event_eth_tx_adapter_txq_set()
+ */
+static __rte_always_inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
+{
+	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
+	return p[1];
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the adapter event port. The adapter creates an event port if
+ * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * ethernet Tx capabilities of the event device.
+ *
+ * @param id
+ *  Adapter Identifier.
+ * @param[out] event_port_id
+ *  Event port pointer.
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
+/**
+ * Enqueue a burst of event objects or an event object supplied in *rte_event*
+ * structure on an event device designated by its *dev_id* through the event
+ * port specified by *port_id*. This function is supported if the eventdev PMD
+ * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue which are
+ * supplied in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_eth_tx_adapter_enqueue() function returns the number of
+ * events objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param dev_id
+ *  The identifier of the device.
+ * @param port_id
+ *  The identifier of the event port.
+ * @param ev
+ *  Points to an array of *nb_events* objects of type *rte_event* structure
+ *  which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *  The number of event objects to enqueue, typically number of
+ *  rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event devices queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed and the caller has to take care
+ *   of them, and rte_errno is set accordingly. Possible errno values include:
+ *   - -EINVAL  The port ID is invalid, device ID is invalid, an event's queue
+ *              ID is invalid, or an event's sched type doesn't match the
+ *              capabilities of the destination queue.
+ *   - -ENOSPC  The event port was backpressured and unable to enqueue
+ *              one or more events. This error code is only applicable to
+ *              closed systems.
+ */
+static inline uint16_t __rte_experimental
+rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
+				uint8_t port_id,
+				struct rte_event ev[],
+				uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+		!rte_eventdevs[dev_id].attached) {
+		rte_errno = -EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = -EINVAL;
+		return 0;
+	}
+#endif
+	return dev->txa_enqueue(dev->data->ports[port_id], ev, nb_events);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter.
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the service ID of an adapter. If the adapter does not use
+ * an rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ *  Adapter identifier.
+ * @param [out] service_id
+ *  A pointer to a uint32_t, to be filled in with the service id.
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure; -ESRCH if the adapter does not use an
+ *    rte_service function.
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif	/* _RTE_EVENT_ETH_TX_ADAPTER_ */
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index a50b05c..b47a5c5 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -464,7 +464,9 @@ struct rte_mbuf {
 	};
 	uint16_t nb_segs;         /**< Number of segments. */
 
-	/** Input port (16 bits to support more than 256 virtual ports). */
+	/** Input port (16 bits to support more than 256 virtual ports).
+	 * The event eth Tx adapter uses this field to specify the output port.
+	 */
 	uint16_t port;
 
 	uint64_t ol_flags;        /**< Offload features. */
@@ -530,6 +532,7 @@ struct rte_mbuf {
 		struct {
 			uint32_t lo;
 			uint32_t hi;
+			/**< @see rte_event_eth_tx_adapter_txq_set */
 		} sched;          /**< Hierarchical scheduler */
 		uint32_t usr;	  /**< User defined tags. See rte_distributor_process() */
 	} hash;                   /**< hash information */
diff --git a/MAINTAINERS b/MAINTAINERS
index f222701..3f06b56 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -391,6 +391,11 @@ F: lib/librte_eventdev/*crypto_adapter*
 F: test/test/test_event_crypto_adapter.c
 F: doc/guides/prog_guide/event_crypto_adapter.rst
 
+Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
+M: Nikhil Rao <nikhil.rao@intel.com>
+T: git://dpdk.org/next/dpdk-next-eventdev
+F: lib/librte_eventdev/*eth_tx_adapter*
+
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
 M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 9265907..911a902 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -24,6 +24,7 @@ The public API headers are grouped by topics:
   [event_eth_rx_adapter]   (@ref rte_event_eth_rx_adapter.h),
   [event_timer_adapter]    (@ref rte_event_timer_adapter.h),
   [event_crypto_adapter]   (@ref rte_event_crypto_adapter.h),
+  [event_eth_tx_adapter]   (@ref rte_event_eth_tx_adapter.h),
   [rawdev]             (@ref rte_rawdev.h),
   [metrics]            (@ref rte_metrics.h),
   [bitrate]            (@ref rte_bitrate.h),
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v4 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
@ 2018-09-20 17:41     ` Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 3/5] eventdev: add eth Tx adapter implementation Nikhil Rao
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-09-20 17:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz, marko.kovacevic, john.mcnamara; +Cc: dev, Nikhil Rao

The caps API allows the application to query whether the transmit
stage is implemented in the eventdev PMD or uses the common
rte_service function. The PMD callbacks support the
eventdev PMD implementation of the adapter.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_eventdev/rte_event_eth_tx_adapter.h |   8 +-
 lib/librte_eventdev/rte_eventdev.h             |  33 +++-
 lib/librte_eventdev/rte_eventdev_pmd.h         | 200 +++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c             |  36 +++++
 4 files changed, 272 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
index 3e0d5c6..0f378a6 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -39,14 +39,14 @@
  * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
  *
  * The adapter will use the common implementation when the eventdev PMD
- * does not have the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
+ * does not have the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
  * The common implementation uses an event port that is created using the port
  * configuration parameter passed to rte_event_eth_tx_adapter_create(). The
  * application can get the port identifier using
  * rte_event_eth_tx_adapter_event_port_get() and must link an event queue to
  * this port.
  *
- * If the eventdev PMD has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
+ * If the eventdev PMD has the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
  * flags set, Tx adapter events should be enqueued using the
  * rte_event_eth_tx_adapter_enqueue() function, else the application should
  * use rte_event_enqueue_burst().
@@ -329,7 +329,7 @@ struct rte_event_eth_tx_adapter_stats {
  * @b EXPERIMENTAL: this API may change without prior notice
  *
  * Retrieve the adapter event port. The adapter creates an event port if
- * the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
+ * the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is not set in the
  * ethernet Tx capabilities of the event device.
  *
  * @param id
@@ -347,7 +347,7 @@ struct rte_event_eth_tx_adapter_stats {
  * Enqueue a burst of event objects or an event object supplied in *rte_event*
  * structure on an event device designated by its *dev_id* through the event
  * port specified by *port_id*. This function is supported if the eventdev PMD
- * has the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
+ * has the #RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability flag set.
  *
  * The *nb_events* parameter is the number of event objects to enqueue which are
  * supplied in the *ev* array of *rte_event* structure.
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b6fd6ee..4717292 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1186,6 +1186,32 @@ struct rte_event {
 rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 				  uint32_t *caps);
 
+/* Ethdev Tx adapter capability bitmap flags */
+#define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT	0x1
+/**< This flag is set when the PMD supports a packet transmit callback
+ */
+
+/**
+ * Retrieve the event device's eth Tx adapter capabilities
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param eth_port_id
+ *   The identifier of the ethernet device.
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
+				uint32_t *caps);
+
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
@@ -1204,6 +1230,10 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 		uint16_t nb_events, uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+typedef uint16_t (*event_tx_adapter_enqueue)(void *port,
+				struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
 #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
 /**< @internal Max length of name of event PMD */
 
@@ -1266,7 +1296,8 @@ struct rte_eventdev {
 	/**< Pointer to PMD dequeue function. */
 	event_dequeue_burst_t dequeue_burst;
 	/**< Pointer to PMD dequeue burst function. */
-
+	event_tx_adapter_enqueue txa_enqueue;
+	/**< Pointer to PMD eth Tx adapter enqueue function. */
 	struct rte_eventdev_data *data;
 	/**< Pointer to device data */
 	struct rte_eventdev_ops *dev_ops;
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 3fbb4d2..cd7145a 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -789,6 +789,186 @@ typedef int (*eventdev_crypto_adapter_stats_reset)
 			(const struct rte_eventdev *dev,
 			 const struct rte_cryptodev *cdev);
 
+/**
+ * Retrieve the event device's eth Tx adapter capabilities.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with eth Tx adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides eth Tx adapter capabilities
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_tx_adapter_caps_get_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					uint32_t *caps);
+
+/**
+ * Create adapter callback.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_create_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Free adapter callback.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_free_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Add a Tx queue to the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param tx_queue_id
+ *   Transmit queue index
+ *
+ * @return
+ *   - 0: Success.
+ *   - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_add_t)(
+					uint8_t id,
+					const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Delete a Tx queue from the adapter.
+ * A queue value of -1 is used to indicate all
+ * queues within the device, that have been added to this
+ * adapter.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param tx_queue_id
+ *   Transmit queue index
+ *
+ * @return
+ *  - 0: Success, Queues deleted successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_del_t)(
+					uint8_t id,
+					const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t tx_queue_id);
+
+/**
+ * Start the adapter.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_start_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+/**
+ * Stop the adapter.
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stop_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
+struct rte_event_eth_tx_adapter_stats;
+
+/**
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter
+ *
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_get_t)(
+				uint8_t id,
+				const struct rte_eventdev *dev,
+				struct rte_event_eth_tx_adapter_stats *stats);
+
+/**
+ * Reset statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+typedef int (*eventdev_eth_tx_adapter_stats_reset_t)(uint8_t id,
+					const struct rte_eventdev *dev);
+
 /** Event device operations function pointer table */
 struct rte_eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -862,6 +1042,26 @@ struct rte_eventdev_ops {
 	eventdev_crypto_adapter_stats_reset crypto_adapter_stats_reset;
 	/**< Reset crypto stats */
 
+	eventdev_eth_tx_adapter_caps_get_t eth_tx_adapter_caps_get;
+	/**< Get ethernet Tx adapter capabilities */
+
+	eventdev_eth_tx_adapter_create_t eth_tx_adapter_create;
+	/**< Create adapter callback */
+	eventdev_eth_tx_adapter_free_t eth_tx_adapter_free;
+	/**< Free adapter callback */
+	eventdev_eth_tx_adapter_queue_add_t eth_tx_adapter_queue_add;
+	/**< Add Tx queues to the eth Tx adapter */
+	eventdev_eth_tx_adapter_queue_del_t eth_tx_adapter_queue_del;
+	/**< Delete Tx queues from the eth Tx adapter */
+	eventdev_eth_tx_adapter_start_t eth_tx_adapter_start;
+	/**< Start eth Tx adapter */
+	eventdev_eth_tx_adapter_stop_t eth_tx_adapter_stop;
+	/**< Stop eth Tx adapter */
+	eventdev_eth_tx_adapter_stats_get_t eth_tx_adapter_stats_get;
+	/**< Get eth Tx adapter statistics */
+	eventdev_eth_tx_adapter_stats_reset_t eth_tx_adapter_stats_reset;
+	/**< Reset eth Tx adapter statistics */
+
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
 
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 801810e..c4decd5 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -175,6 +175,31 @@
 		(dev, cdev, caps) : -ENOTSUP;
 }
 
+int __rte_experimental
+rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
+				uint32_t *caps)
+{
+	struct rte_eventdev *dev;
+	struct rte_eth_dev *eth_dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_port_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+	eth_dev = &rte_eth_devices[eth_port_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+
+	*caps = 0;
+
+	return dev->dev_ops->eth_tx_adapter_caps_get ?
+			(*dev->dev_ops->eth_tx_adapter_caps_get)(dev,
+								eth_dev,
+								caps)
+			: 0;
+}
+
 static inline int
 rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
@@ -1275,6 +1300,15 @@ int rte_event_dev_selftest(uint8_t dev_id)
 	return RTE_EVENT_MAX_DEVS;
 }
 
+static uint16_t
+rte_event_tx_adapter_enqueue(__rte_unused void *port,
+			__rte_unused struct rte_event ev[],
+			__rte_unused uint16_t nb_events)
+{
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
 struct rte_eventdev *
 rte_event_pmd_allocate(const char *name, int socket_id)
 {
@@ -1295,6 +1329,8 @@ struct rte_eventdev *
 
 	eventdev = &rte_eventdevs[dev_id];
 
+	eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
+
 	if (eventdev->data == NULL) {
 		struct rte_eventdev_data *eventdev_data = NULL;
 
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v4 3/5] eventdev: add eth Tx adapter implementation
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
@ 2018-09-20 17:41     ` Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-09-20 17:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz, marko.kovacevic, john.mcnamara; +Cc: dev, Nikhil Rao

This patch implements the Tx adapter APIs by invoking the
corresponding eventdev PMD callbacks and also provides the
common rte_service function based implementation for use when
eventdev PMD support is absent.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 config/rte_config.h                            |    1 +
 lib/librte_eventdev/rte_event_eth_tx_adapter.c | 1138 ++++++++++++++++++++++++
 config/common_base                             |    2 +-
 lib/librte_eventdev/Makefile                   |    2 +
 lib/librte_eventdev/meson.build                |    6 +-
 lib/librte_eventdev/rte_eventdev_version.map   |   12 +
 6 files changed, 1158 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_tx_adapter.c

diff --git a/config/rte_config.h b/config/rte_config.h
index ee84f04..73e71af 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -69,6 +69,7 @@
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
+#define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
 
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 10
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
new file mode 100644
index 0000000..aae0378
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
@@ -0,0 +1,1138 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation.
+ */
+#include <rte_spinlock.h>
+#include <rte_service_component.h>
+#include <rte_ethdev.h>
+
+#include "rte_eventdev_pmd.h"
+#include "rte_event_eth_tx_adapter.h"
+
+#define TXA_BATCH_SIZE		32
+#define TXA_SERVICE_NAME_LEN	32
+#define TXA_MEM_NAME_LEN	32
+#define TXA_FLUSH_THRESHOLD	1024
+#define TXA_RETRY_CNT		100
+#define TXA_MAX_NB_TX		128
+#define TXA_INVALID_DEV_ID	INT32_C(-1)
+#define TXA_INVALID_SERVICE_ID	INT64_C(-1)
+
+#define txa_evdev(id) (&rte_eventdevs[txa_dev_id_array[(id)]])
+
+#define txa_dev_caps_get(id) txa_evdev((id))->dev_ops->eth_tx_adapter_caps_get
+
+#define txa_dev_adapter_create(t) txa_evdev(t)->dev_ops->eth_tx_adapter_create
+
+#define txa_dev_adapter_create_ext(t) \
+				txa_evdev(t)->dev_ops->eth_tx_adapter_create
+
+#define txa_dev_adapter_free(t) txa_evdev(t)->dev_ops->eth_tx_adapter_free
+
+#define txa_dev_queue_add(id) txa_evdev(id)->dev_ops->eth_tx_adapter_queue_add
+
+#define txa_dev_queue_del(t) txa_evdev(t)->dev_ops->eth_tx_adapter_queue_del
+
+#define txa_dev_start(t) txa_evdev(t)->dev_ops->eth_tx_adapter_start
+
+#define txa_dev_stop(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stop
+
+#define txa_dev_stats_reset(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_reset
+
+#define txa_dev_stats_get(t) txa_evdev(t)->dev_ops->eth_tx_adapter_stats_get
+
+#define RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
+do { \
+	if (!txa_valid_id(id)) { \
+		RTE_EDEV_LOG_ERR("Invalid eth Tx adapter id = %d", id); \
+		return retval; \
+	} \
+} while (0)
+
+#define TXA_CHECK_OR_ERR_RET(id) \
+do {\
+	int ret; \
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET((id), -EINVAL); \
+	ret = txa_init(); \
+	if (ret != 0) \
+		return ret; \
+	if (!txa_adapter_exist((id))) \
+		return -EINVAL; \
+} while (0)
+
+/* Tx retry callback structure */
+struct txa_retry {
+	/* Ethernet port id */
+	uint16_t port_id;
+	/* Tx queue */
+	uint16_t tx_queue;
+	/* Adapter ID */
+	uint8_t id;
+};
+
+/* Per queue structure */
+struct txa_service_queue_info {
+	/* Queue has been added */
+	uint8_t added;
+	/* Retry callback argument */
+	struct txa_retry txa_retry;
+	/* Tx buffer */
+	struct rte_eth_dev_tx_buffer *tx_buf;
+};
+
+/* PMD private structure */
+struct txa_service_data {
+	/* Max mbufs processed in any service function invocation */
+	uint32_t max_nb_tx;
+	/* Number of Tx queues in adapter */
+	uint32_t nb_queues;
+	/*  Synchronization with data path */
+	rte_spinlock_t tx_lock;
+	/* Event port ID */
+	uint8_t port_id;
+	/* Event device identifier */
+	uint8_t eventdev_id;
+	/* Highest port id supported + 1 */
+	uint16_t dev_count;
+	/* Loop count to flush Tx buffers */
+	int loop_cnt;
+	/* Per ethernet device structure */
+	struct txa_service_ethdev *txa_ethdev;
+	/* Statistics */
+	struct rte_event_eth_tx_adapter_stats stats;
+	/* Adapter Identifier */
+	uint8_t id;
+	/* Conf arg must be freed */
+	uint8_t conf_free;
+	/* Configuration callback */
+	rte_event_eth_tx_adapter_conf_cb conf_cb;
+	/* Configuration callback argument */
+	void *conf_arg;
+	/* socket id */
+	int socket_id;
+	/* Per adapter EAL service */
+	int64_t service_id;
+	/* Memory allocation name */
+	char mem_name[TXA_MEM_NAME_LEN];
+} __rte_cache_aligned;
+
+/* Per eth device structure */
+struct txa_service_ethdev {
+	/* Pointer to ethernet device */
+	struct rte_eth_dev *dev;
+	/* Number of queues added */
+	uint16_t nb_queues;
+	/* PMD specific queue data */
+	void *queues;
+};
+
+/* Array of adapter instances, initialized with event device id
+ * when adapter is created
+ */
+static int *txa_dev_id_array;
+
+/* Array of pointers to service implementation data */
+static struct txa_service_data **txa_service_data_array;
+
+static int32_t txa_service_func(void *args);
+static int txa_service_adapter_create_ext(uint8_t id,
+			struct rte_eventdev *dev,
+			rte_event_eth_tx_adapter_conf_cb conf_cb,
+			void *conf_arg);
+static int txa_service_queue_del(uint8_t id,
+				const struct rte_eth_dev *dev,
+				int32_t tx_queue_id);
+
+static int
+txa_adapter_exist(uint8_t id)
+{
+	return txa_dev_id_array[id] != TXA_INVALID_DEV_ID;
+}
+
+static inline int
+txa_valid_id(uint8_t id)
+{
+	return id < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE;
+}
+
+static void *
+txa_memzone_array_get(const char *name, unsigned int elt_size, int nb_elems)
+{
+	const struct rte_memzone *mz;
+	unsigned int sz;
+
+	sz = elt_size * nb_elems;
+	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+
+	mz = rte_memzone_lookup(name);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve_aligned(name, sz, rte_socket_id(), 0,
+						 RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			RTE_EDEV_LOG_ERR("failed to reserve memzone"
+					" name = %s err = %"
+					PRId32, name, rte_errno);
+			return NULL;
+		}
+	}
+
+	return mz->addr;
+}
+
+static int
+txa_dev_id_array_init(void)
+{
+	if (txa_dev_id_array == NULL) {
+		int i;
+
+		txa_dev_id_array = txa_memzone_array_get("txa_adapter_array",
+					sizeof(int),
+					RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE);
+		if (txa_dev_id_array == NULL)
+			return -ENOMEM;
+
+		for (i = 0; i < RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE; i++)
+			txa_dev_id_array[i] = TXA_INVALID_DEV_ID;
+	}
+
+	return 0;
+}
+
+static int
+txa_init(void)
+{
+	return txa_dev_id_array_init();
+}
+
+static int
+txa_service_data_init(void)
+{
+	if (txa_service_data_array == NULL) {
+		txa_service_data_array =
+				txa_memzone_array_get("txa_service_data_array",
+					sizeof(struct txa_service_data *),
+					RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE);
+		if (txa_service_data_array == NULL)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static inline struct txa_service_data *
+txa_service_id_to_data(uint8_t id)
+{
+	return txa_service_data_array[id];
+}
+
+static inline struct txa_service_queue_info *
+txa_service_queue(struct txa_service_data *txa, uint16_t port_id,
+		uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	if (unlikely(txa->txa_ethdev == NULL || txa->dev_count < port_id + 1))
+		return NULL;
+
+	tqi = txa->txa_ethdev[port_id].queues;
+
+	return likely(tqi != NULL) ? tqi + tx_queue_id : NULL;
+}
+
+static int
+txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
+		struct rte_event_eth_tx_adapter_conf *conf, void *arg)
+{
+	int ret;
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf *pc;
+	struct rte_event_dev_config dev_conf;
+	int started;
+	uint8_t port_id;
+
+	pc = arg;
+	dev = &rte_eventdevs[dev_id];
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(dev_id);
+
+	port_id = dev_conf.nb_event_ports;
+	dev_conf.nb_event_ports += 1;
+
+	ret = rte_event_dev_configure(dev_id, &dev_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to configure event dev %u",
+						dev_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	pc->disable_implicit_release = 0;
+	ret = rte_event_port_setup(dev_id, port_id, pc);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to setup event port %u",
+					port_id);
+		if (started) {
+			if (rte_event_dev_start(dev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	conf->event_port_id = port_id;
+	conf->max_nb_tx = TXA_MAX_NB_TX;
+	if (started)
+		ret = rte_event_dev_start(dev_id);
+	return ret;
+}
+
+static int
+txa_service_ethdev_alloc(struct txa_service_data *txa)
+{
+	struct txa_service_ethdev *txa_ethdev;
+	uint16_t i, dev_count;
+
+	dev_count = rte_eth_dev_count_avail();
+	if (txa->txa_ethdev && dev_count == txa->dev_count)
+		return 0;
+
+	txa_ethdev = rte_zmalloc_socket(txa->mem_name,
+					dev_count * sizeof(*txa_ethdev),
+					0,
+					txa->socket_id);
+	if (txa_ethdev == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to alloc txa::txa_ethdev");
+		return -ENOMEM;
+	}
+
+	if (txa->dev_count)
+		memcpy(txa_ethdev, txa->txa_ethdev,
+			txa->dev_count * sizeof(*txa_ethdev));
+
+	RTE_ETH_FOREACH_DEV(i) {
+		if (i == dev_count)
+			break;
+		txa_ethdev[i].dev = &rte_eth_devices[i];
+	}
+
+	txa->txa_ethdev = txa_ethdev;
+	txa->dev_count = dev_count;
+	return 0;
+}
+
+static int
+txa_service_queue_array_alloc(struct txa_service_data *txa,
+			uint16_t port_id)
+{
+	struct txa_service_queue_info *tqi;
+	uint16_t nb_queue;
+	int ret;
+
+	ret = txa_service_ethdev_alloc(txa);
+	if (ret != 0)
+		return ret;
+
+	if (txa->txa_ethdev[port_id].queues)
+		return 0;
+
+	nb_queue = txa->txa_ethdev[port_id].dev->data->nb_tx_queues;
+	tqi = rte_zmalloc_socket(txa->mem_name,
+				nb_queue *
+				sizeof(struct txa_service_queue_info), 0,
+				txa->socket_id);
+	if (tqi == NULL)
+		return -ENOMEM;
+	txa->txa_ethdev[port_id].queues = tqi;
+	return 0;
+}
+
+static void
+txa_service_queue_array_free(struct txa_service_data *txa,
+			uint16_t port_id)
+{
+	struct txa_service_ethdev *txa_ethdev;
+	struct txa_service_queue_info *tqi;
+
+	txa_ethdev = &txa->txa_ethdev[port_id];
+	if (txa->txa_ethdev == NULL || txa_ethdev->nb_queues != 0)
+		return;
+
+	tqi = txa_ethdev->queues;
+	txa_ethdev->queues = NULL;
+	rte_free(tqi);
+
+	if (txa->nb_queues == 0) {
+		rte_free(txa->txa_ethdev);
+		txa->txa_ethdev = NULL;
+	}
+}
+
+static void
+txa_service_unregister(struct txa_service_data *txa)
+{
+	if (txa->service_id != TXA_INVALID_SERVICE_ID) {
+		rte_service_component_runstate_set(txa->service_id, 0);
+		while (rte_service_may_be_active(txa->service_id))
+			rte_pause();
+		rte_service_component_unregister(txa->service_id);
+	}
+	txa->service_id = TXA_INVALID_SERVICE_ID;
+}
+
+static int
+txa_service_register(struct txa_service_data *txa)
+{
+	int ret;
+	struct rte_service_spec service;
+	struct rte_event_eth_tx_adapter_conf conf;
+
+	if (txa->service_id != TXA_INVALID_SERVICE_ID)
+		return 0;
+
+	memset(&service, 0, sizeof(service));
+	snprintf(service.name, TXA_SERVICE_NAME_LEN, "txa_%d", txa->id);
+	service.socket_id = txa->socket_id;
+	service.callback = txa_service_func;
+	service.callback_userdata = txa;
+	service.capabilities = RTE_SERVICE_CAP_MT_SAFE;
+	ret = rte_service_component_register(&service,
+					(uint32_t *)&txa->service_id);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to register service %s err = %"
+				 PRId32, service.name, ret);
+		return ret;
+	}
+
+	ret = txa->conf_cb(txa->id, txa->eventdev_id, &conf, txa->conf_arg);
+	if (ret) {
+		txa_service_unregister(txa);
+		return ret;
+	}
+
+	rte_service_component_runstate_set(txa->service_id, 1);
+	txa->port_id = conf.event_port_id;
+	txa->max_nb_tx = conf.max_nb_tx;
+	return 0;
+}
+
+static struct rte_eth_dev_tx_buffer *
+txa_service_tx_buf_alloc(struct txa_service_data *txa,
+			const struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	port_id = dev->data->port_id;
+	tb = rte_zmalloc_socket(txa->mem_name,
+				RTE_ETH_TX_BUFFER_SIZE(TXA_BATCH_SIZE),
+				0,
+				rte_eth_dev_socket_id(port_id));
+	if (tb == NULL)
+		RTE_EDEV_LOG_ERR("Failed to allocate memory for tx buffer");
+	return tb;
+}
+
+static int
+txa_service_is_queue_added(struct txa_service_data *txa,
+			const struct rte_eth_dev *dev,
+			uint16_t tx_queue_id)
+{
+	struct txa_service_queue_info *tqi;
+
+	tqi = txa_service_queue(txa, dev->data->port_id, tx_queue_id);
+	return tqi && tqi->added;
+}
+
+static int
+txa_service_ctrl(uint8_t id, int start)
+{
+	int ret;
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return 0;
+
+	ret = rte_service_runstate_set(txa->service_id, start);
+	if (ret == 0 && !start) {
+		while (rte_service_may_be_active(txa->service_id))
+			rte_pause();
+	}
+	return ret;
+}
+
+static void
+txa_service_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+			void *userdata)
+{
+	struct txa_retry *tr;
+	struct txa_service_data *data;
+	struct rte_event_eth_tx_adapter_stats *stats;
+	uint16_t sent = 0;
+	unsigned int retry = 0;
+	uint16_t i, n;
+
+	tr = (struct txa_retry *)(uintptr_t)userdata;
+	data = txa_service_id_to_data(tr->id);
+	stats = &data->stats;
+
+	do {
+		n = rte_eth_tx_burst(tr->port_id, tr->tx_queue,
+			       &pkts[sent], unsent - sent);
+
+		sent += n;
+	} while (sent != unsent && retry++ < TXA_RETRY_CNT);
+
+	for (i = sent; i < unsent; i++)
+		rte_pktmbuf_free(pkts[i]);
+
+	stats->tx_retry += retry;
+	stats->tx_packets += sent;
+	stats->tx_dropped += unsent - sent;
+}
+
+static void
+txa_service_tx(struct txa_service_data *txa, struct rte_event *ev,
+	uint32_t n)
+{
+	uint32_t i;
+	uint16_t nb_tx;
+	struct rte_event_eth_tx_adapter_stats *stats;
+
+	stats = &txa->stats;
+
+	nb_tx = 0;
+	for (i = 0; i < n; i++) {
+		struct rte_mbuf *m;
+		uint16_t port;
+		uint16_t queue;
+		struct txa_service_queue_info *tqi;
+
+		m = ev[i].mbuf;
+		port = m->port;
+		queue = rte_event_eth_tx_adapter_txq_get(m);
+
+		tqi = txa_service_queue(txa, port, queue);
+		if (unlikely(tqi == NULL || !tqi->added)) {
+			rte_pktmbuf_free(m);
+			continue;
+		}
+
+		nb_tx += rte_eth_tx_buffer(port, queue, tqi->tx_buf, m);
+	}
+
+	stats->tx_packets += nb_tx;
+}
+
+static int32_t
+txa_service_func(void *args)
+{
+	struct txa_service_data *txa = args;
+	uint8_t dev_id;
+	uint8_t port;
+	uint16_t n;
+	uint32_t nb_tx, max_nb_tx;
+	struct rte_event ev[TXA_BATCH_SIZE];
+
+	dev_id = txa->eventdev_id;
+	max_nb_tx = txa->max_nb_tx;
+	port = txa->port_id;
+
+	if (txa->nb_queues == 0)
+		return 0;
+
+	if (!rte_spinlock_trylock(&txa->tx_lock))
+		return 0;
+
+	for (nb_tx = 0; nb_tx < max_nb_tx; nb_tx += n) {
+
+		n = rte_event_dequeue_burst(dev_id, port, ev, RTE_DIM(ev), 0);
+		if (!n)
+			break;
+		txa_service_tx(txa, ev, n);
+	}
+
+	if ((txa->loop_cnt++ & (TXA_FLUSH_THRESHOLD - 1)) == 0) {
+
+		struct txa_service_ethdev *tdi;
+		struct txa_service_queue_info *tqi;
+		struct rte_eth_dev *dev;
+		uint16_t i;
+
+		tdi = txa->txa_ethdev;
+		nb_tx = 0;
+
+		RTE_ETH_FOREACH_DEV(i) {
+			uint16_t q;
+
+			if (i == txa->dev_count)
+				break;
+
+			dev = tdi[i].dev;
+			if (tdi[i].nb_queues == 0)
+				continue;
+			for (q = 0; q < dev->data->nb_tx_queues; q++) {
+
+				tqi = txa_service_queue(txa, i, q);
+				if (unlikely(tqi == NULL || !tqi->added))
+					continue;
+
+				nb_tx += rte_eth_tx_buffer_flush(i, q,
+							tqi->tx_buf);
+			}
+		}
+
+		txa->stats.tx_packets += nb_tx;
+	}
+	rte_spinlock_unlock(&txa->tx_lock);
+	return 0;
+}
+
+static int
+txa_service_adapter_create(uint8_t id, struct rte_eventdev *dev,
+			struct rte_event_port_conf *port_conf)
+{
+	struct txa_service_data *txa;
+	struct rte_event_port_conf *cb_conf;
+	int ret;
+
+	cb_conf = rte_malloc(NULL, sizeof(*cb_conf), 0);
+	if (cb_conf == NULL)
+		return -ENOMEM;
+
+	*cb_conf = *port_conf;
+	ret = txa_service_adapter_create_ext(id, dev, txa_service_conf_cb,
+					cb_conf);
+	if (ret) {
+		rte_free(cb_conf);
+		return ret;
+	}
+
+	txa = txa_service_id_to_data(id);
+	txa->conf_free = 1;
+	return ret;
+}
+
+static int
+txa_service_adapter_create_ext(uint8_t id, struct rte_eventdev *dev,
+			rte_event_eth_tx_adapter_conf_cb conf_cb,
+			void *conf_arg)
+{
+	struct txa_service_data *txa;
+	int socket_id;
+	char mem_name[TXA_MEM_NAME_LEN];
+	int ret;
+
+	if (conf_cb == NULL)
+		return -EINVAL;
+
+	socket_id = dev->data->socket_id;
+	snprintf(mem_name, TXA_MEM_NAME_LEN,
+		"rte_event_eth_txa_%d",
+		id);
+
+	ret = txa_service_data_init();
+	if (ret != 0)
+		return ret;
+
+	txa = rte_zmalloc_socket(mem_name,
+				sizeof(*txa),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txa == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for tx adapter");
+		return -ENOMEM;
+	}
+
+	txa->id = id;
+	txa->eventdev_id = dev->data->dev_id;
+	txa->socket_id = socket_id;
+	strncpy(txa->mem_name, mem_name, TXA_MEM_NAME_LEN);
+	txa->conf_cb = conf_cb;
+	txa->conf_arg = conf_arg;
+	txa->service_id = TXA_INVALID_SERVICE_ID;
+	rte_spinlock_init(&txa->tx_lock);
+	txa_service_data_array[id] = txa;
+
+	return 0;
+}
+
+static int
+txa_service_event_port_get(uint8_t id, uint8_t *port)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return -ENODEV;
+
+	*port = txa->port_id;
+	return 0;
+}
+
+static int
+txa_service_adapter_free(uint8_t id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->nb_queues) {
+		RTE_EDEV_LOG_ERR("%" PRIu16 " Tx queues not deleted",
+				txa->nb_queues);
+		return -EBUSY;
+	}
+
+	if (txa->conf_free)
+		rte_free(txa->conf_arg);
+	rte_free(txa);
+	return 0;
+}
+
+static int
+txa_service_queue_add(uint8_t id,
+		__rte_unused struct rte_eventdev *dev,
+		const struct rte_eth_dev *eth_dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *txa;
+	struct txa_service_ethdev *tdi;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	struct txa_retry *txa_retry;
+	int ret;
+
+	txa = txa_service_id_to_data(id);
+
+	if (tx_queue_id == -1) {
+		int nb_queues;
+		uint16_t i, j;
+		uint16_t *qdone;
+
+		nb_queues = eth_dev->data->nb_tx_queues;
+		if (txa->dev_count > eth_dev->data->port_id) {
+			tdi = &txa->txa_ethdev[eth_dev->data->port_id];
+			nb_queues -= tdi->nb_queues;
+		}
+
+		qdone = rte_zmalloc(txa->mem_name,
+				nb_queues * sizeof(*qdone), 0);
+		if (qdone == NULL)
+			return -ENOMEM;
+		j = 0;
+		for (i = 0; i < nb_queues; i++) {
+			if (txa_service_is_queue_added(txa, eth_dev, i))
+				continue;
+			ret = txa_service_queue_add(id, dev, eth_dev, i);
+			if (ret == 0)
+				qdone[j++] = i;
+			else
+				break;
+		}
+
+		if (i != nb_queues) {
+			for (i = 0; i < j; i++)
+				txa_service_queue_del(id, eth_dev, qdone[i]);
+		}
+		rte_free(qdone);
+		return ret;
+	}
+
+	ret = txa_service_register(txa);
+	if (ret)
+		return ret;
+
+	rte_spinlock_lock(&txa->tx_lock);
+
+	if (txa_service_is_queue_added(txa, eth_dev, tx_queue_id)) {
+		rte_spinlock_unlock(&txa->tx_lock);
+		return 0;
+	}
+
+	ret = txa_service_queue_array_alloc(txa, eth_dev->data->port_id);
+	if (ret)
+		goto err_unlock;
+
+	tb = txa_service_tx_buf_alloc(txa, eth_dev);
+	if (tb == NULL) {
+		ret = -ENOMEM;
+		goto err_unlock;
+	}
+
+	tdi = &txa->txa_ethdev[eth_dev->data->port_id];
+	tqi = txa_service_queue(txa, eth_dev->data->port_id, tx_queue_id);
+
+	txa_retry = &tqi->txa_retry;
+	txa_retry->id = txa->id;
+	txa_retry->port_id = eth_dev->data->port_id;
+	txa_retry->tx_queue = tx_queue_id;
+
+	rte_eth_tx_buffer_init(tb, TXA_BATCH_SIZE);
+	rte_eth_tx_buffer_set_err_callback(tb,
+		txa_service_buffer_retry, txa_retry);
+
+	tqi->tx_buf = tb;
+	tqi->added = 1;
+	tdi->nb_queues++;
+	txa->nb_queues++;
+
+err_unlock:
+	if (txa->nb_queues == 0) {
+		txa_service_queue_array_free(txa,
+					eth_dev->data->port_id);
+		txa_service_unregister(txa);
+	}
+
+	rte_spinlock_unlock(&txa->tx_lock);
+	return ret;
+}
+
+static int
+txa_service_queue_del(uint8_t id,
+		const struct rte_eth_dev *dev,
+		int32_t tx_queue_id)
+{
+	struct txa_service_data *txa;
+	struct txa_service_queue_info *tqi;
+	struct rte_eth_dev_tx_buffer *tb;
+	uint16_t port_id;
+
+	if (tx_queue_id == -1) {
+		uint16_t i;
+		int ret = 0;
+
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			ret = txa_service_queue_del(id, dev, i);
+			if (ret != 0)
+				break;
+		}
+		return ret;
+	}
+
+	txa = txa_service_id_to_data(id);
+	port_id = dev->data->port_id;
+
+	tqi = txa_service_queue(txa, port_id, tx_queue_id);
+	if (tqi == NULL || !tqi->added)
+		return 0;
+
+	tb = tqi->tx_buf;
+	tqi->added = 0;
+	tqi->tx_buf = NULL;
+	rte_free(tb);
+	txa->nb_queues--;
+	txa->txa_ethdev[port_id].nb_queues--;
+
+	txa_service_queue_array_free(txa, port_id);
+	return 0;
+}
+
+static int
+txa_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	if (txa->service_id == TXA_INVALID_SERVICE_ID)
+		return -ESRCH;
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	*service_id = txa->service_id;
+	return 0;
+}
+
+static int
+txa_service_start(uint8_t id)
+{
+	return txa_service_ctrl(id, 1);
+}
+
+static int
+txa_service_stats_get(uint8_t id,
+		struct rte_event_eth_tx_adapter_stats *stats)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	*stats = txa->stats;
+	return 0;
+}
+
+static int
+txa_service_stats_reset(uint8_t id)
+{
+	struct txa_service_data *txa;
+
+	txa = txa_service_id_to_data(id);
+	memset(&txa->stats, 0, sizeof(txa->stats));
+	return 0;
+}
+
+static int
+txa_service_stop(uint8_t id)
+{
+	return txa_service_ctrl(id, 0);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+	int ret;
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+
+	ret = txa_init();
+	if (ret != 0)
+		return ret;
+
+	if (txa_adapter_exist(id))
+		return -EEXIST;
+
+	txa_dev_id_array[id] = dev_id;
+	if (txa_dev_adapter_create(id))
+		ret = txa_dev_adapter_create(id)(id, dev);
+
+	if (ret != 0) {
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	ret = txa_service_adapter_create(id, dev, port_conf);
+	if (ret != 0) {
+		if (txa_dev_adapter_free(id))
+			txa_dev_adapter_free(id)(id, dev);
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	txa_dev_id_array[id] = dev_id;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_tx_adapter_conf_cb conf_cb,
+				void *conf_arg)
+{
+	struct rte_eventdev *dev;
+	int ret;
+
+	RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	ret = txa_init();
+	if (ret != 0)
+		return ret;
+
+	if (txa_adapter_exist(id))
+		return -EEXIST;
+
+	dev = &rte_eventdevs[dev_id];
+
+	txa_dev_id_array[id] = dev_id;
+	if (txa_dev_adapter_create_ext(id))
+		ret = txa_dev_adapter_create_ext(id)(id, dev);
+
+	if (ret != 0) {
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	ret = txa_service_adapter_create_ext(id, dev, conf_cb, conf_arg);
+	if (ret != 0) {
+		if (txa_dev_adapter_free(id))
+			txa_dev_adapter_free(id)(id, dev);
+		txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+		return ret;
+	}
+
+	txa_dev_id_array[id] = dev_id;
+	return 0;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+	TXA_CHECK_OR_ERR_RET(id);
+
+	return txa_service_event_port_get(id, event_port_id);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_free(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_adapter_free(id) ?
+		txa_dev_adapter_free(id)(id, txa_evdev(id)) :
+		0;
+
+	if (ret == 0)
+		ret = txa_service_adapter_free(id);
+	txa_dev_id_array[id] = TXA_INVALID_DEV_ID;
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_add(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_eth_dev *eth_dev;
+	int ret;
+	uint32_t caps;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+	TXA_CHECK_OR_ERR_RET(id);
+
+	eth_dev = &rte_eth_devices[eth_dev_id];
+	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	caps = 0;
+	if (txa_dev_caps_get(id))
+		txa_dev_caps_get(id)(txa_evdev(id), eth_dev, &caps);
+
+	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+		ret = txa_dev_queue_add(id) ?
+					txa_dev_queue_add(id)(id,
+							txa_evdev(id),
+							eth_dev,
+							queue) : 0;
+	else
+		ret = txa_service_queue_add(id, txa_evdev(id), eth_dev, queue);
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_queue_del(uint8_t id,
+				uint16_t eth_dev_id,
+				int32_t queue)
+{
+	struct rte_eth_dev *eth_dev;
+	int ret;
+	uint32_t caps;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+	TXA_CHECK_OR_ERR_RET(id);
+
+	eth_dev = &rte_eth_devices[eth_dev_id];
+	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
+				(uint16_t)queue);
+		return -EINVAL;
+	}
+
+	caps = 0;
+
+	if (txa_dev_caps_get(id))
+		txa_dev_caps_get(id)(txa_evdev(id), eth_dev, &caps);
+
+	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+		ret = txa_dev_queue_del(id) ?
+					txa_dev_queue_del(id)(id, txa_evdev(id),
+							eth_dev,
+							queue) : 0;
+	else
+		ret = txa_service_queue_del(id, eth_dev, queue);
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	TXA_CHECK_OR_ERR_RET(id);
+
+	return txa_service_id_get(id, service_id);
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_start(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_start(id) ? txa_dev_start(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_start(id);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_tx_adapter_stats *stats)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	if (stats == NULL)
+		return -EINVAL;
+
+	*stats = (struct rte_event_eth_tx_adapter_stats){0};
+
+	ret = txa_dev_stats_get(id) ?
+			txa_dev_stats_get(id)(id, txa_evdev(id), stats) : 0;
+
+	if (ret == 0 && txa_service_id_get(id, NULL) != -ESRCH) {
+		if (txa_dev_stats_get(id)) {
+			struct rte_event_eth_tx_adapter_stats service_stats;
+
+			ret = txa_service_stats_get(id, &service_stats);
+			if (ret == 0) {
+				stats->tx_retry += service_stats.tx_retry;
+				stats->tx_packets += service_stats.tx_packets;
+				stats->tx_dropped += service_stats.tx_dropped;
+			}
+		} else
+			ret = txa_service_stats_get(id, stats);
+	}
+
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stats_reset(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_stats_reset(id) ?
+		txa_dev_stats_reset(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_stats_reset(id);
+	return ret;
+}
+
+int __rte_experimental
+rte_event_eth_tx_adapter_stop(uint8_t id)
+{
+	int ret;
+
+	TXA_CHECK_OR_ERR_RET(id);
+
+	ret = txa_dev_stop(id) ? txa_dev_stop(id)(id, txa_evdev(id)) : 0;
+	if (ret == 0)
+		ret = txa_service_stop(id);
+	return ret;
+}
diff --git a/config/common_base b/config/common_base
index 86e3833..f00b4bc 100644
--- a/config/common_base
+++ b/config/common_base
@@ -602,7 +602,7 @@ CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
 CONFIG_RTE_EVENT_TIMER_ADAPTER_NUM_MAX=32
 CONFIG_RTE_EVENT_ETH_INTR_RING_SIZE=1024
 CONFIG_RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE=32
-
+CONFIG_RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE=32
 #
 # Compile PMD for skeleton event device
 #
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index 47f599a..424ff35 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -28,6 +28,7 @@ SRCS-y += rte_event_ring.c
 SRCS-y += rte_event_eth_rx_adapter.c
 SRCS-y += rte_event_timer_adapter.c
 SRCS-y += rte_event_crypto_adapter.c
+SRCS-y += rte_event_eth_tx_adapter.c
 
 # export include files
 SYMLINK-y-include += rte_eventdev.h
@@ -39,6 +40,7 @@ SYMLINK-y-include += rte_event_eth_rx_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter.h
 SYMLINK-y-include += rte_event_timer_adapter_pmd.h
 SYMLINK-y-include += rte_event_crypto_adapter.h
+SYMLINK-y-include += rte_event_eth_tx_adapter.h
 
 # versioning export map
 EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/meson.build b/lib/librte_eventdev/meson.build
index 3cbaf29..47989e7 100644
--- a/lib/librte_eventdev/meson.build
+++ b/lib/librte_eventdev/meson.build
@@ -14,7 +14,8 @@ sources = files('rte_eventdev.c',
 		'rte_event_ring.c',
 		'rte_event_eth_rx_adapter.c',
 		'rte_event_timer_adapter.c',
-		'rte_event_crypto_adapter.c')
+		'rte_event_crypto_adapter.c',
+		'rte_event_eth_tx_adapter.c')
 headers = files('rte_eventdev.h',
 		'rte_eventdev_pmd.h',
 		'rte_eventdev_pmd_pci.h',
@@ -23,5 +24,6 @@ headers = files('rte_eventdev.h',
 		'rte_event_eth_rx_adapter.h',
 		'rte_event_timer_adapter.h',
 		'rte_event_timer_adapter_pmd.h',
-		'rte_event_crypto_adapter.h')
+		'rte_event_crypto_adapter.h',
+		'rte_event_eth_tx_adapter.h')
 deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 12835e9..19c1494 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -96,6 +96,18 @@ EXPERIMENTAL {
 	rte_event_crypto_adapter_stats_reset;
 	rte_event_crypto_adapter_stop;
 	rte_event_eth_rx_adapter_cb_register;
+	rte_event_eth_tx_adapter_caps_get;
+	rte_event_eth_tx_adapter_create;
+	rte_event_eth_tx_adapter_create_ext;
+	rte_event_eth_tx_adapter_event_port_get;
+	rte_event_eth_tx_adapter_free;
+	rte_event_eth_tx_adapter_queue_add;
+	rte_event_eth_tx_adapter_queue_del;
+	rte_event_eth_tx_adapter_service_id_get;
+	rte_event_eth_tx_adapter_start;
+	rte_event_eth_tx_adapter_stats_get;
+	rte_event_eth_tx_adapter_stats_reset;
+	rte_event_eth_tx_adapter_stop;
 	rte_event_timer_adapter_caps_get;
 	rte_event_timer_adapter_create;
 	rte_event_timer_adapter_create_ext;
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v4 4/5] eventdev: add auto test for eth Tx adapter
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 3/5] eventdev: add eth Tx adapter implementation Nikhil Rao
@ 2018-09-20 17:41     ` Nikhil Rao
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 5/5] doc: add event eth Tx adapter guide Nikhil Rao
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-09-20 17:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz, marko.kovacevic, john.mcnamara; +Cc: dev, Nikhil Rao

This patch adds tests for the eth Tx adapter APIs. It also
tests the data path for the rte_service function based
implementation of the APIs.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 test/test/test_event_eth_tx_adapter.c | 699 ++++++++++++++++++++++++++++++++++
 MAINTAINERS                           |   1 +
 test/test/Makefile                    |   1 +
 test/test/meson.build                 |   2 +
 4 files changed, 703 insertions(+)
 create mode 100644 test/test/test_event_eth_tx_adapter.c

diff --git a/test/test/test_event_eth_tx_adapter.c b/test/test/test_event_eth_tx_adapter.c
new file mode 100644
index 0000000..c26c515
--- /dev/null
+++ b/test/test/test_event_eth_tx_adapter.c
@@ -0,0 +1,699 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_bus_vdev.h>
+#include <rte_common.h>
+#include <rte_ethdev.h>
+#include <rte_eth_ring.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_service.h>
+
+#include "test.h"
+
+#define MAX_NUM_QUEUE		RTE_PMD_RING_MAX_RX_RINGS
+#define TEST_INST_ID		0
+#define TEST_DEV_ID		0
+#define SOCKET0			0
+#define RING_SIZE		256
+#define ETH_NAME_LEN		32
+#define NUM_ETH_PAIR		1
+#define NUM_ETH_DEV		(2 * NUM_ETH_PAIR)
+#define NB_MBUF			512
+#define PAIR_PORT_INDEX(p)	((p) + NUM_ETH_PAIR)
+#define PORT(p)			default_params.port[(p)]
+#define TEST_ETHDEV_ID		PORT(0)
+#define TEST_ETHDEV_PAIR_ID	PORT(PAIR_PORT_INDEX(0))
+
+#define EDEV_RETRY		0xffff
+
+struct event_eth_tx_adapter_test_params {
+	struct rte_mempool *mp;
+	uint16_t rx_rings, tx_rings;
+	struct rte_ring *r[NUM_ETH_DEV][MAX_NUM_QUEUE];
+	int port[NUM_ETH_DEV];
+};
+
+static int event_dev_delete;
+static struct event_eth_tx_adapter_test_params default_params;
+static uint64_t eid = ~0ULL;
+static uint32_t tid;
+
+static inline int
+port_init_common(uint8_t port, const struct rte_eth_conf *port_conf,
+		struct rte_mempool *mp)
+{
+	const uint16_t rx_ring_size = RING_SIZE, tx_ring_size = RING_SIZE;
+	int retval;
+	uint16_t q;
+
+	if (!rte_eth_dev_is_valid_port(port))
+		return -1;
+
+	default_params.rx_rings = MAX_NUM_QUEUE;
+	default_params.tx_rings = MAX_NUM_QUEUE;
+
+	/* Configure the Ethernet device. */
+	retval = rte_eth_dev_configure(port, default_params.rx_rings,
+				default_params.tx_rings, port_conf);
+	if (retval != 0)
+		return retval;
+
+	for (q = 0; q < default_params.rx_rings; q++) {
+		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+				rte_eth_dev_socket_id(port), NULL, mp);
+		if (retval < 0)
+			return retval;
+	}
+
+	for (q = 0; q < default_params.tx_rings; q++) {
+		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+				rte_eth_dev_socket_id(port), NULL);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Start the Ethernet port. */
+	retval = rte_eth_dev_start(port);
+	if (retval < 0)
+		return retval;
+
+	/* Display the port MAC address. */
+	struct ether_addr addr;
+	rte_eth_macaddr_get(port, &addr);
+	printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+			   " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+			(unsigned int)port,
+			addr.addr_bytes[0], addr.addr_bytes[1],
+			addr.addr_bytes[2], addr.addr_bytes[3],
+			addr.addr_bytes[4], addr.addr_bytes[5]);
+
+	/* Enable RX in promiscuous mode for the Ethernet device. */
+	rte_eth_promiscuous_enable(port);
+
+	return 0;
+}
+
+static inline int
+port_init(uint8_t port, struct rte_mempool *mp)
+{
+	struct rte_eth_conf conf = { 0 };
+	return port_init_common(port, &conf, mp);
+}
+
+#define RING_NAME_LEN	20
+#define DEV_NAME_LEN	20
+
+static int
+init_ports(void)
+{
+	char ring_name[ETH_NAME_LEN];
+	unsigned int i, j;
+	struct rte_ring * const *c1;
+	struct rte_ring * const *c2;
+	int err;
+
+	if (!default_params.mp)
+		default_params.mp = rte_pktmbuf_pool_create("mbuf_pool",
+			NB_MBUF, 32,
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+
+	if (!default_params.mp)
+		return -ENOMEM;
+
+	for (i = 0; i < NUM_ETH_DEV; i++) {
+		for (j = 0; j < MAX_NUM_QUEUE; j++) {
+			snprintf(ring_name, sizeof(ring_name), "R%u%u", i, j);
+			default_params.r[i][j] = rte_ring_create(ring_name,
+						RING_SIZE,
+						SOCKET0,
+						RING_F_SP_ENQ | RING_F_SC_DEQ);
+			TEST_ASSERT((default_params.r[i][j] != NULL),
+				"Failed to allocate ring");
+		}
+	}
+
+	/*
+	 * Create two pseudo-Ethernet ports with the traffic
+	 * switched between them, that is, traffic sent to port 1 is
+	 * read back from port 2 and vice-versa.
+	 */
+	for (i = 0; i < NUM_ETH_PAIR; i++) {
+		char dev_name[DEV_NAME_LEN];
+		int p;
+
+		c1 = default_params.r[i];
+		c2 = default_params.r[PAIR_PORT_INDEX(i)];
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u", i, i + NUM_ETH_PAIR);
+		p = rte_eth_from_rings(dev_name, c1, MAX_NUM_QUEUE,
+				 c2, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[i] = p;
+
+		snprintf(dev_name, DEV_NAME_LEN, "%u-%u",  i + NUM_ETH_PAIR, i);
+		p = rte_eth_from_rings(dev_name, c2, MAX_NUM_QUEUE,
+				c1, MAX_NUM_QUEUE, SOCKET0);
+		TEST_ASSERT(p >= 0, "Port creation failed %s", dev_name);
+		err = port_init(p, default_params.mp);
+		TEST_ASSERT(err == 0, "Port init failed %s", dev_name);
+		default_params.port[PAIR_PORT_INDEX(i)] = p;
+	}
+
+	return 0;
+}
+
+static void
+deinit_ports(void)
+{
+	uint16_t i, j;
+	char name[ETH_NAME_LEN];
+
+	for (i = 0; i < RTE_DIM(default_params.port); i++) {
+		rte_eth_dev_stop(default_params.port[i]);
+		rte_eth_dev_get_name_by_port(default_params.port[i], name);
+		rte_vdev_uninit(name);
+		for (j = 0; j < RTE_DIM(default_params.r[i]); j++)
+			rte_ring_free(default_params.r[i][j]);
+	}
+}
+
+static int
+testsuite_setup(void)
+{
+	const char *vdev_name = "event_sw0";
+
+	int err = init_ports();
+	TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
+
+	if (rte_event_dev_count() == 0) {
+		printf("Failed to find a valid event device,"
+			" testing with event_sw0 device\n");
+		err = rte_vdev_init(vdev_name, NULL);
+		TEST_ASSERT(err == 0, "vdev %s creation failed  %d\n",
+			vdev_name, err);
+		event_dev_delete = 1;
+	}
+	return err;
+}
+
+#define DEVICE_ID_SIZE 64
+
+static void
+testsuite_teardown(void)
+{
+	deinit_ports();
+	rte_mempool_free(default_params.mp);
+	default_params.mp = NULL;
+	if (event_dev_delete)
+		rte_vdev_uninit("event_sw0");
+}
+
+static int
+tx_adapter_create(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf;
+	uint8_t priority;
+	uint8_t queue_id;
+
+	struct rte_event_dev_config config = {
+			.nb_event_queues = 1,
+			.nb_event_ports = 1,
+	};
+
+	struct rte_event_queue_conf wkr_q_conf = {
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = 1024,
+			.nb_atomic_order_sequences = 1024,
+	};
+
+	memset(&tx_p_conf, 0, sizeof(tx_p_conf));
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	config.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	config.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	config.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	config.nb_events_limit =
+			dev_info.max_num_events;
+
+	err = rte_event_dev_configure(TEST_DEV_ID, &config);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	queue_id = 0;
+	err = rte_event_queue_setup(TEST_DEV_ID, 0, &wkr_q_conf);
+	TEST_ASSERT(err == 0, "Event queue setup failed %d\n", err);
+
+	err = rte_event_port_setup(TEST_DEV_ID, 0, NULL);
+	TEST_ASSERT(err == 0, "Event port setup failed %d\n", err);
+
+	priority = RTE_EVENT_DEV_PRIORITY_LOWEST;
+	err = rte_event_port_link(TEST_DEV_ID, 0, &queue_id, &priority, 1);
+	TEST_ASSERT(err == 1, "Error linking port %s\n",
+		rte_strerror(rte_errno));
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	return err;
+}
+
+static void
+tx_adapter_free(void)
+{
+	rte_event_eth_tx_adapter_free(TEST_INST_ID);
+}
+
+static int
+tx_adapter_create_free(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf tx_p_conf;
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	tx_p_conf.new_event_threshold = dev_info.max_num_events;
+	tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&tx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_create(TEST_INST_ID,
+					TEST_DEV_ID, &tx_p_conf);
+	TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d", -EEXIST, err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	err = rte_event_eth_tx_adapter_free(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_queue_add_del(void)
+{
+	int err;
+	uint32_t cap;
+
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+					 &cap);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						rte_eth_dev_count_total(),
+						-1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+						TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_add(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_start_stop(void)
+{
+	int err;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_start(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stop(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+tx_adapter_single(uint16_t port, uint16_t tx_queue_id,
+		struct rte_mbuf *m, uint8_t qid,
+		uint8_t sched_type)
+{
+	struct rte_event event;
+	struct rte_mbuf *r;
+	int ret;
+	unsigned int l;
+
+	event.queue_id = qid;
+	event.op = RTE_EVENT_OP_NEW;
+	event.event_type = RTE_EVENT_TYPE_CPU;
+	event.sched_type = sched_type;
+	event.mbuf = m;
+
+	m->port = port;
+	rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+	l = 0;
+	while (rte_event_enqueue_burst(TEST_DEV_ID, 0, &event, 1) != 1) {
+		l++;
+		if (l > EDEV_RETRY)
+			break;
+	}
+
+	TEST_ASSERT(l < EDEV_RETRY, "Unable to enqueue to eventdev");
+	l = 0;
+	while (l++ < EDEV_RETRY) {
+
+		if (eid != ~0ULL) {
+			ret = rte_service_run_iter_on_app_lcore(eid, 0);
+			TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+		}
+
+		ret = rte_service_run_iter_on_app_lcore(tid, 0);
+		TEST_ASSERT(ret == 0, "failed to run service %d", ret);
+
+		if (rte_eth_rx_burst(TEST_ETHDEV_PAIR_ID, tx_queue_id,
+				&r, 1)) {
+			TEST_ASSERT_EQUAL(r, m, "mbuf comparison failed"
+					" expected %p received %p", m, r);
+			return 0;
+		}
+	}
+
+	TEST_ASSERT(0, "Failed to receive packet");
+	return -1;
+}
+
+static int
+tx_adapter_service(void)
+{
+	struct rte_event_eth_tx_adapter_stats stats;
+	uint32_t i;
+	int err;
+	uint8_t ev_port, ev_qid;
+	struct rte_mbuf  bufs[RING_SIZE];
+	struct rte_mbuf *pbufs[RING_SIZE];
+	struct rte_event_dev_info dev_info;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_queue_conf qconf;
+	uint32_t qcnt, pcnt;
+	uint16_t q;
+	int internal_port;
+	uint32_t cap;
+
+	memset(&dev_conf, 0, sizeof(dev_conf));
+	err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+						&cap);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n", err);
+
+	internal_port = !!(cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+	if (internal_port)
+		return TEST_SUCCESS;
+
+	err = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_event_port_get(TEST_INST_ID,
+						&ev_port);
+	TEST_ASSERT_SUCCESS(err, "Failed to get event port %d", err);
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID, RTE_EVENT_DEV_ATTR_PORT_COUNT,
+					&pcnt);
+	TEST_ASSERT_SUCCESS(err, "Port count get failed");
+
+	err = rte_event_dev_attr_get(TEST_DEV_ID,
+				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &qcnt);
+	TEST_ASSERT_SUCCESS(err, "Queue count get failed");
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT_SUCCESS(err, "Dev info failed");
+
+	dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	dev_conf.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	dev_conf.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	dev_conf.nb_events_limit =
+			dev_info.max_num_events;
+	dev_conf.nb_event_queues = qcnt + 1;
+	dev_conf.nb_event_ports = pcnt;
+	err = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	ev_qid = qcnt;
+	qconf.nb_atomic_flows = dev_info.max_event_queue_flows;
+	qconf.nb_atomic_order_sequences = 32;
+	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
+	qconf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	err = rte_event_queue_setup(TEST_DEV_ID, ev_qid, &qconf);
+	TEST_ASSERT_SUCCESS(err, "Failed to setup queue %u", ev_qid);
+
+	/*
+	 * Set up the ports again so that the newly added queue is visible
+	 * to them.
+	 */
+	for (i = 0; i < pcnt; i++) {
+
+		int n_links;
+		uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+		uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+		if (i == ev_port)
+			continue;
+
+		n_links = rte_event_port_links_get(TEST_DEV_ID, i, queues,
+						priorities);
+		TEST_ASSERT(n_links > 0, "Failed to get port links %d\n",
+			n_links);
+		err = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT(err == 0, "Failed to setup port err %d\n", err);
+		err = rte_event_port_link(TEST_DEV_ID, i, queues, priorities,
+					n_links);
+		TEST_ASSERT(n_links == err, "Failed to link all queues"
+			" err %s\n", rte_strerror(rte_errno));
+	}
+
+	err = rte_event_port_link(TEST_DEV_ID, ev_port, &ev_qid, NULL, 1);
+	TEST_ASSERT(err == 1, "Failed to link queue port %u",
+		    ev_port);
+
+	err = rte_event_eth_tx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	if (!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
+		err = rte_event_dev_service_id_get(TEST_DEV_ID, (uint32_t *)&eid);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_service_runstate_set(eid, 1);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_service_set_runstate_mapped_check(eid, 0);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	}
+
+	err = rte_event_eth_tx_adapter_service_id_get(TEST_INST_ID, &tid);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_runstate_set(tid, 1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_service_set_runstate_mapped_check(tid, 0);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	for (q = 0; q < MAX_NUM_QUEUE; q++) {
+		for (i = 0; i < RING_SIZE; i++) {
+			pbufs[i] = &bufs[i];
+			err = tx_adapter_single(TEST_ETHDEV_ID, q, pbufs[i],
+						ev_qid,
+						RTE_SCHED_TYPE_ORDERED);
+			TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+		}
+		for (i = 0; i < RING_SIZE; i++) {
+			TEST_ASSERT_EQUAL(pbufs[i], &bufs[i],
+				"Error: received data does not match"
+				" that transmitted");
+		}
+	}
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, MAX_NUM_QUEUE * RING_SIZE,
+			"stats.tx_packets expected %u got %"PRIu64,
+			MAX_NUM_QUEUE * RING_SIZE,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_reset(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	TEST_ASSERT_EQUAL(stats.tx_packets, 0,
+			"stats.tx_packets expected %u got %"PRIu64,
+			0,
+			stats.tx_packets);
+
+	err = rte_event_eth_tx_adapter_stats_get(1, &stats);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_tx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	rte_event_dev_stop(TEST_DEV_ID);
+
+	return TEST_SUCCESS;
+}
+
+static int
+tx_adapter_dynamic_device(void)
+{
+	uint16_t port_id = rte_eth_dev_count_avail();
+	const char *null_dev[2] = { "eth_null0", "eth_null1" };
+	struct rte_eth_conf dev_conf;
+	int ret;
+	size_t i;
+
+	memset(&dev_conf, 0, sizeof(dev_conf));
+	for (i = 0; i < RTE_DIM(null_dev); i++) {
+		ret = rte_vdev_init(null_dev[i], NULL);
+		TEST_ASSERT_SUCCESS(ret, "%s Port creation failed %d",
+				null_dev[i], ret);
+
+		if (i == 0) {
+			ret = tx_adapter_create();
+			TEST_ASSERT_SUCCESS(ret, "Adapter create failed %d",
+					ret);
+		}
+
+		ret = rte_eth_dev_configure(port_id + i, MAX_NUM_QUEUE,
+					MAX_NUM_QUEUE, &dev_conf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to configure device %d", ret);
+
+		ret = rte_event_eth_tx_adapter_queue_add(TEST_INST_ID,
+							port_id + i, 0);
+		TEST_ASSERT_SUCCESS(ret, "Failed to add queues %d", ret);
+
+	}
+
+	for (i = 0; i < RTE_DIM(null_dev); i++) {
+		ret = rte_event_eth_tx_adapter_queue_del(TEST_INST_ID,
+							port_id + i, -1);
+		TEST_ASSERT_SUCCESS(ret, "Failed to delete queues %d", ret);
+	}
+
+	tx_adapter_free();
+
+	for (i = 0; i < RTE_DIM(null_dev); i++)
+		rte_vdev_uninit(null_dev[i]);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite event_eth_tx_tests = {
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.suite_name = "tx event eth adapter test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, tx_adapter_create_free),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_queue_add_del),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_start_stop),
+		TEST_CASE_ST(tx_adapter_create, tx_adapter_free,
+					tx_adapter_service),
+		TEST_CASE_ST(NULL, NULL, tx_adapter_dynamic_device),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_event_eth_tx_adapter_common(void)
+{
+	return unit_test_suite_runner(&event_eth_tx_tests);
+}
+
+REGISTER_TEST_COMMAND(event_eth_tx_adapter_autotest,
+		test_event_eth_tx_adapter_common);
diff --git a/MAINTAINERS b/MAINTAINERS
index 3f06b56..93699ba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -395,6 +395,7 @@ Eventdev Ethdev Tx Adapter API - EXPERIMENTAL
 M: Nikhil Rao <nikhil.rao@intel.com>
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_tx_adapter*
+F: test/test/test_event_eth_tx_adapter.c
 
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/test/test/Makefile b/test/test/Makefile
index e6967ba..dcea441 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -191,6 +191,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
 SRCS-y += test_eventdev.c
 SRCS-y += test_event_ring.c
 SRCS-y += test_event_eth_rx_adapter.c
+SRCS-y += test_event_eth_tx_adapter.c
 SRCS-y += test_event_timer_adapter.c
 SRCS-y += test_event_crypto_adapter.c
 endif
diff --git a/test/test/meson.build b/test/test/meson.build
index b1dd6ec..3d2887b 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -34,6 +34,7 @@ test_sources = files('commands.c',
 	'test_efd_perf.c',
 	'test_errno.c',
 	'test_event_ring.c',
+	'test_event_eth_tx_adapter.c',
 	'test_eventdev.c',
 	'test_func_reentrancy.c',
 	'test_flow_classify.c',
@@ -152,6 +153,7 @@ test_names = [
 	'efd_perf_autotest',
 	'errno_autotest',
 	'event_ring_autotest',
+	'event_eth_tx_adapter_autotest',
 	'eventdev_common_autotest',
 	'eventdev_octeontx_autotest',
 	'eventdev_sw_autotest',
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v4 5/5] doc: add event eth Tx adapter guide
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
                       ` (2 preceding siblings ...)
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
@ 2018-09-20 17:41     ` Nikhil Rao
  2018-09-21  5:04     ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Jerin Jacob
  2018-09-28 10:05     ` Jerin Jacob
  5 siblings, 0 replies; 27+ messages in thread
From: Nikhil Rao @ 2018-09-20 17:41 UTC (permalink / raw)
  To: jerin.jacob, olivier.matz, marko.kovacevic, john.mcnamara; +Cc: dev, Nikhil Rao

Add programmer's guide doc to explain the use of the
Event Ethernet Tx Adapter library.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS                                        |   1 +
 .../prog_guide/event_ethernet_tx_adapter.rst       | 165 +++++++++++++++++++++
 doc/guides/prog_guide/index.rst                    |   1 +
 doc/guides/rel_notes/release_18_11.rst             |   8 +
 4 files changed, 175 insertions(+)
 create mode 100644 doc/guides/prog_guide/event_ethernet_tx_adapter.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 93699ba..6f6755c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -396,6 +396,7 @@ M: Nikhil Rao <nikhil.rao@intel.com>
 T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/*eth_tx_adapter*
 F: test/test/test_event_eth_tx_adapter.c
+F: doc/guides/prog_guide/event_ethernet_tx_adapter.rst
 
 Raw device API - EXPERIMENTAL
 M: Shreyansh Jain <shreyansh.jain@nxp.com>
diff --git a/doc/guides/prog_guide/event_ethernet_tx_adapter.rst b/doc/guides/prog_guide/event_ethernet_tx_adapter.rst
new file mode 100644
index 0000000..192f9e1
--- /dev/null
+++ b/doc/guides/prog_guide/event_ethernet_tx_adapter.rst
@@ -0,0 +1,165 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2017 Intel Corporation.
+
+Event Ethernet Tx Adapter Library
+=================================
+
+The DPDK Eventdev API allows the application to use an event driven programming
+model for packet processing in which the event device distributes events
+referencing packets to the application cores in a dynamic load balanced fashion
+while handling atomicity and packet ordering. Event adapters provide the interface
+between the ethernet, crypto and timer devices and the event device. Event adapter
+APIs enable common application code by abstracting PMD specific capabilities.
+The Event ethernet Tx adapter provides configuration and data path APIs for the
+transmit stage of the application, allowing the same application code to use
+eventdev PMD support or, in its absence, a common implementation.
+
+In the common implementation, the application enqueues mbufs to the adapter
+which runs as a rte_service function. The service function dequeues events
+from its event port and transmits the mbufs referenced by these events.
+
+
+API Walk-through
+----------------
+
+This section introduces the reader to the adapter API. The application
+first instantiates an adapter, which is associated with a single eventdev;
+next, the adapter instance is configured with Tx queues; finally, the
+adapter is started and the application can begin enqueuing mbufs to it.
+
+Creating an Adapter Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An adapter instance is created using ``rte_event_eth_tx_adapter_create()``. This
+function is passed the event device to be associated with the adapter and a port
+configuration used by the adapter to set up an event port if it needs to use
+a service function.
+
+If the application desires to have finer control of eventdev port configuration,
+it can use the ``rte_event_eth_tx_adapter_create_ext()`` function. The
+``rte_event_eth_tx_adapter_create_ext()`` function is passed a callback function.
+The callback function is invoked if the adapter needs to use a service function
+and needs to create an event port for it. The callback is expected to fill the
+``struct rte_event_eth_tx_adapter_conf`` structure passed to it.
+
+.. code-block:: c
+
+        struct rte_event_dev_info dev_info;
+        struct rte_event_port_conf tx_p_conf = {0};
+
+        err = rte_event_dev_info_get(id, &dev_info);
+
+        tx_p_conf.new_event_threshold = dev_info.max_num_events;
+        tx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+        tx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+        err = rte_event_eth_tx_adapter_create(id, dev_id, &tx_p_conf);
+
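+A conf callback passed to ``rte_event_eth_tx_adapter_create_ext()`` could be
+sketched as below. The callback here reuses an event port previously set up by
+the application; the port id argument and the ``max_nb_tx`` value are
+illustrative assumptions, not requirements of the API.
+
+.. code-block:: c
+
+        static int
+        tx_adapter_conf_cb(uint8_t id, uint8_t dev_id,
+                        struct rte_event_eth_tx_adapter_conf *conf, void *arg)
+        {
+                /* Application-provided event port for the adapter */
+                uint8_t port_id = *(uint8_t *)arg;
+
+                RTE_SET_USED(id);
+                RTE_SET_USED(dev_id);
+                conf->event_port_id = port_id;
+                conf->max_nb_tx = 128; /* illustrative Tx burst limit */
+                return 0;
+        }
+
+        err = rte_event_eth_tx_adapter_create_ext(id, dev_id,
+                                                tx_adapter_conf_cb, &ev_port);
+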
+Adding Tx Queues to the Adapter Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Ethdev Tx queues are added to the instance using the
+``rte_event_eth_tx_adapter_queue_add()`` function. A queue value
+of -1 is used to indicate all queues within a device.
+
+.. code-block:: c
+
+        int err = rte_event_eth_tx_adapter_queue_add(id, eth_dev_id, q);
+
+Querying Adapter Capabilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``rte_event_eth_tx_adapter_caps_get()`` function allows
+the application to query the adapter capabilities for an eventdev and ethdev
+combination. Currently, the only capability flag defined is
+``RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT``. The application can
+query this flag to determine whether a service function is associated with the
+adapter and, if so, retrieve its service identifier using the
+``rte_event_eth_tx_adapter_service_id_get()`` API.
+
+
+.. code-block:: c
+
+        int err = rte_event_eth_tx_adapter_caps_get(dev_id, eth_dev_id, &cap);
+
+        if (!(cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
+                err = rte_event_eth_tx_adapter_service_id_get(id, &service_id);
+
+Linking a Queue to the Adapter's Event Port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the adapter uses a service function as described in the previous section, the
+application is required to link a queue to the adapter's event port. The adapter's
+event port can be obtained using the ``rte_event_eth_tx_adapter_event_port_get()``
+function. The queue can be configured with the ``RTE_EVENT_QUEUE_CFG_SINGLE_LINK``
+flag since it is linked to a single event port.
+
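+The steps above might be sketched as follows; error handling is omitted and
+``id``, ``dev_id``, ``qid`` (the id chosen for the new event queue) and
+``qconf`` are application-defined:
+
+.. code-block:: c
+
+        uint8_t tx_port_id;
+        uint8_t tx_queue_id = qid; /* event queue created for the adapter */
+
+        qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+        err = rte_event_queue_setup(dev_id, tx_queue_id, &qconf);
+
+        err = rte_event_eth_tx_adapter_event_port_get(id, &tx_port_id);
+        err = rte_event_port_link(dev_id, tx_port_id, &tx_queue_id, NULL, 1);
+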
+Configuring the Service Function
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the adapter uses a service function, the application can assign
+a service core to the service function as shown below.
+
+.. code-block:: c
+
+        if (rte_event_eth_tx_adapter_service_id_get(id, &service_id) == 0)
+                rte_service_map_lcore_set(service_id, TX_CORE_ID);
+
+Starting the Adapter Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application calls ``rte_event_eth_tx_adapter_start()`` to start the adapter.
+This function calls the start callback of the eventdev PMD if supported,
+and ``rte_service_runstate_set()`` to enable the service function if one exists.
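+
+For example, assuming the adapter instance ``id`` from the earlier steps:
+
+.. code-block:: c
+
+        err = rte_event_eth_tx_adapter_start(id);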
+
+Enqueuing Packets to the Adapter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application needs to notify the adapter about the transmit port and queue used
+to send the packet. The transmit port is set in the ``rte_mbuf::port`` field
+and the transmit queue is set using the ``rte_event_eth_tx_adapter_txq_set()``
+function.
+
+If the eventdev PMD supports the ``RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT``
+capability for a given ethernet device, the application should use the
+``rte_event_eth_tx_adapter_enqueue()`` function to enqueue packets to the adapter.
+
+If the adapter uses a service function for the ethernet device then the application
+should use the ``rte_event_enqueue_burst()`` function.
+
+.. code-block:: c
+
+	struct rte_event event;
+
+	if (cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) {
+
+		event.mbuf = m;
+
+		m->port = tx_port;
+		rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+		rte_event_eth_tx_adapter_enqueue(dev_id, ev_port, &event, 1);
+	} else {
+
+		event.queue_id = qid; /* event queue linked to adapter port */
+		event.op = RTE_EVENT_OP_NEW;
+		event.event_type = RTE_EVENT_TYPE_CPU;
+		event.sched_type = RTE_SCHED_TYPE_ATOMIC;
+		event.mbuf = m;
+
+		m->port = tx_port;
+		rte_event_eth_tx_adapter_txq_set(m, tx_queue_id);
+
+		rte_event_enqueue_burst(dev_id, ev_port, &event, 1);
+	}
+
+Getting Adapter Statistics
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The  ``rte_event_eth_tx_adapter_stats_get()`` function reports counters defined
+in struct ``rte_event_eth_tx_adapter_stats``. The counter values are the sum of
+the counts from the eventdev PMD callback if the callback is supported, and
+the counts maintained by the service function, if one exists.
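+
+A minimal usage sketch; the counters printed here are a subset of the fields
+in ``struct rte_event_eth_tx_adapter_stats``:
+
+.. code-block:: c
+
+        struct rte_event_eth_tx_adapter_stats stats;
+
+        err = rte_event_eth_tx_adapter_stats_get(id, &stats);
+        if (err == 0)
+                printf("tx_packets %" PRIu64 " tx_dropped %" PRIu64 "\n",
+                        stats.tx_packets, stats.tx_dropped);
+
+The counters can be cleared with ``rte_event_eth_tx_adapter_stats_reset()``.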
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 3b920e5..c81d9c5 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -44,6 +44,7 @@ Programmer's Guide
     thread_safety_dpdk_functions
     eventdev
     event_ethernet_rx_adapter
+    event_ethernet_tx_adapter
     event_timer_adapter
     event_crypto_adapter
     qos_framework
diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 97daad1..7f2636a 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -67,6 +67,14 @@ New Features
   SR-IOV option in Hyper-V and Azure. This is an alternative to the previous
   vdev_netvsc, tap, and failsafe drivers combination.
 
+* **Added Event Ethernet Tx Adapter.**
+
+  Added event ethernet Tx adapter library that provides configuration and
+  data path APIs for the ethernet transmit stage of an event driven packet
+  processing application. These APIs abstract the implementation of the
+  transmit stage and allow the application to use eventdev PMD support or
+  a common implementation.
+
 * **Added Distributed Software Eventdev PMD.**
 
   Added the new Distributed Software Event Device (DSW), which is a
-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
                       ` (3 preceding siblings ...)
  2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 5/5] doc: add event eth Tx adapter guide Nikhil Rao
@ 2018-09-21  5:04     ` Jerin Jacob
  2018-09-28 10:05     ` Jerin Jacob
  5 siblings, 0 replies; 27+ messages in thread
From: Jerin Jacob @ 2018-09-21  5:04 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, marko.kovacevic, john.mcnamara, dev

-----Original Message-----
> Date: Thu, 20 Sep 2018 23:11:12 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com,
>  marko.kovacevic@intel.com, john.mcnamara@intel.com
> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> Subject: [PATCH v4 1/5] eventdev: add eth Tx adapter APIs
> X-Mailer: git-send-email 1.8.3.1
> 
> The ethernet Tx adapter abstracts the transmit stage of an
> event driven packet processing application. The transmit
> stage may be implemented with eventdev PMD support or use a
> rte_service function implemented in the adapter. These APIs
> provide a common configuration and control interface and
> an transmit API for the eventdev PMD implementation.
> 
> The transmit port is specified using mbuf::port. The transmit
> queue is specified using the rte_event_eth_tx_adapter_txq_set()
> function.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_eventdev/rte_event_eth_tx_adapter.h | 462 +++++++++++++++++++++++++
>  lib/librte_mbuf/rte_mbuf.h                     |   5 +-
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index a50b05c..b47a5c5 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -464,7 +464,9 @@ struct rte_mbuf {
>         };
>         uint16_t nb_segs;         /**< Number of segments. */
> 
> -       /** Input port (16 bits to support more than 256 virtual ports). */
> +       /** Input port (16 bits to support more than 256 virtual ports).
> +        * The event eth Tx adapter uses this field to specify the output port.
> +        */
>         uint16_t port;
> 
>         uint64_t ol_flags;        /**< Offload features. */
> @@ -530,6 +532,7 @@ struct rte_mbuf {
>                 struct {
>                         uint32_t lo;
>                         uint32_t hi;
> +                       /**< @see rte_event_eth_tx_adapter_txq_set */
>                 } sched;          /**< Hierarchical scheduler */
>                 uint32_t usr;     /**< User defined tags. See rte_distributor_process() */
>         } hash;                   /**< hash information */

Olivier,

I am planning to take this patch into the next-eventdev tree. Could you
please let us know if you have any comments on the above comment
additions to lib/librte_mbuf/rte_mbuf.h.

If it is a minor one, I can fix it when applying; if it is a major one,
Nikhil can send the next version. Let us know.

/Jerin

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs
  2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
                       ` (4 preceding siblings ...)
  2018-09-21  5:04     ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Jerin Jacob
@ 2018-09-28 10:05     ` Jerin Jacob
  5 siblings, 0 replies; 27+ messages in thread
From: Jerin Jacob @ 2018-09-28 10:05 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: olivier.matz, marko.kovacevic, john.mcnamara, dev

-----Original Message-----
> Date: Thu, 20 Sep 2018 23:11:12 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, olivier.matz@6wind.com,
>  marko.kovacevic@intel.com, john.mcnamara@intel.com
> CC: dev@dpdk.org, Nikhil Rao <nikhil.rao@intel.com>
> Subject: [PATCH v4 1/5] eventdev: add eth Tx adapter APIs
> X-Mailer: git-send-email 1.8.3.1
> 
> The Ethernet Tx adapter abstracts the transmit stage of an
> event-driven packet processing application. The transmit
> stage may be implemented with eventdev PMD support or use a
> rte_service function implemented in the adapter. These APIs
> provide a common configuration and control interface and
> a transmit API for the eventdev PMD implementation.
> 
> The transmit port is specified using mbuf::port. The transmit
> queue is specified using the rte_event_eth_tx_adapter_txq_set()
> function.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Applied this series to dpdk-next-eventdev/master with the following minor
documentation change. Thanks.

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index b47a5c50e..a09377a60 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -532,7 +532,9 @@ struct rte_mbuf {
                struct {
                        uint32_t lo;
                        uint32_t hi;
-                       /**< @see rte_event_eth_tx_adapter_txq_set */
+                       /**< The event eth Tx adapter uses this field to store
+                        * Tx queue id. @see rte_event_eth_tx_adapter_txq_set()
+                        */

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2018-09-28 10:05 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-17  4:20 [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Nikhil Rao
2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 2/4] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
2018-08-19 10:45   ` Jerin Jacob
2018-08-21  8:52     ` Rao, Nikhil
2018-08-21  9:11       ` Jerin Jacob
2018-08-22 13:34         ` Rao, Nikhil
2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 3/4] eventdev: add eth Tx adapter implementation Nikhil Rao
2018-08-17  4:20 ` [dpdk-dev] [PATCH v2 4/4] eventdev: add auto test for eth Tx adapter Nikhil Rao
2018-08-17 11:55   ` Pavan Nikhilesh
2018-08-22 16:13     ` Rao, Nikhil
2018-08-22 16:23       ` Pavan Nikhilesh
2018-08-23  1:48         ` Rao, Nikhil
2018-08-19 10:19 ` [dpdk-dev] [PATCH v2 1/4] eventdev: add eth Tx adapter APIs Jerin Jacob
2018-08-31  5:41 ` [dpdk-dev] [PATCH v3 1/5] " Nikhil Rao
2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 3/5] eventdev: add eth Tx adapter implementation Nikhil Rao
2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
2018-09-17 14:00     ` Jerin Jacob
2018-08-31  5:41   ` [dpdk-dev] [PATCH v3 5/5] doc: add event eth Tx adapter guide Nikhil Rao
2018-09-17 13:56     ` Jerin Jacob
2018-09-20 17:41   ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Nikhil Rao
2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 2/5] eventdev: add caps API and PMD callbacks for eth Tx adapter Nikhil Rao
2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 3/5] eventdev: add eth Tx adapter implementation Nikhil Rao
2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 4/5] eventdev: add auto test for eth Tx adapter Nikhil Rao
2018-09-20 17:41     ` [dpdk-dev] [PATCH v4 5/5] doc: add event eth Tx adapter guide Nikhil Rao
2018-09-21  5:04     ` [dpdk-dev] [PATCH v4 1/5] eventdev: add eth Tx adapter APIs Jerin Jacob
2018-09-28 10:05     ` Jerin Jacob
