In case an event from a previous stage needs to be forwarded to a crypto adapter, and the PMD supports an internal event port in the crypto adapter (exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD), there is no way for the API rte_event_enqueue_burst() to check whether an event is destined for the crypto adapter or for the eth Tx adapter. Hence we need a new API, similar to rte_event_eth_tx_adapter_enqueue(), which can send events to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it describes the event source, not the event destination. Also, the event port designated for the crypto adapter is designed to be used only in OP_NEW mode. Hence, in order to support an event PMD which has an internal event port in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application should use the rte_event_crypto_adapter_enqueue() API to enqueue events. When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the application can use rte_event_enqueue_burst() as before, i.e. retrieve the event port used by the crypto adapter, link its event queues to that port and enqueue events using rte_event_enqueue_burst().
TODO: - test application changes for the usage of new API - support in octeontx2 event PMD Signed-off-by: Shijith Thotton <sthotton@marvell.com> Signed-off-by: Akhil Goyal <gakhil@marvell.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 66 ++++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 7 files changed, 142 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4650ed945 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. 
The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue_burst()`` API to enqueue crypto +operations as events to crypto adapter. If not, application retrieves crypto +adapter's event port using ``rte_event_crypto_adapter_event_port_get()`` API, +links its event queue to this port and starts enqueuing crypto operations as +events to eventdev using ``rte_event_enqueue_burst()``. The adapter then +dequeues the events and submits the crypto operations to the cryptodev. After +the crypto operation is complete, the adapter enqueues events to the event +device. The application can use this mode when ingress packet ordering is +needed. In this mode, events dequeued from the adapter will be treated as +forwarded events. The application needs to specify the cryptodev ID and queue +pair ID (request information) needed to enqueue a crypto operation in addition +to the event information (response information) needed to enqueue an event after +the crypto operation has completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue_burst()`` API. If not, the event port created +by the adapter can be retrieved using +``rte_event_crypto_adapter_event_port_get()`` API. An application can use this +event port to link with an event queue, on which it enqueues events towards the +crypto adapter using ``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . 
ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue_burst(evdev_id, + app_ev_port_id, ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..003667759 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -522,6 +522,72 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. 
+ * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + if (dev_id >= RTE_EVENT_MAX_DEVS || + !rte_eventdevs[dev_id].attached) { + rte_errno = EINVAL; + return 0; + } + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index b57363f80..5674bd38e 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1405,6 +1405,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1427,6 +1436,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index ce1fc2ce0..70e2fa140 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1273,6 +1273,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1344,6 +1348,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1356,7 +1362,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 3e5c09cfd..c63ba7a9c 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -138,6 +138,9 @@ EXPERIMENTAL { __rte_eventdev_trace_port_setup; # added in 20.11 rte_event_pmd_pci_probe_named; + + # 
added in 21.05 + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
This series proposes a new event device enqueue operation for use when crypto adapter forward mode is supported. The second patch in the series implements it in the octeontx2 PMD. Test application changes for the usage of the new API are yet to be added. v1: - Added crypto adapter forward mode support for octeontx2. Akhil Goyal (1): eventdev: introduce crypto adapter enqueue API Shijith Thotton (1): event/octeontx2: support crypto adapter forward mode .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++------ drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 34 +++++--- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 66 +++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 +- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 14 files changed, 259 insertions(+), 44 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h -- 2.25.1
From: Akhil Goyal <gakhil@marvell.com> In case an event from a previous stage needs to be forwarded to a crypto adapter, and the PMD supports an internal event port in the crypto adapter (exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD), there is no way for the API rte_event_enqueue_burst() to check whether an event is destined for the crypto adapter or for the eth Tx adapter. Hence we need a new API, similar to rte_event_eth_tx_adapter_enqueue(), which can send events to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it describes the event source, not the event destination. Also, the event port designated for the crypto adapter is designed to be used only in OP_NEW mode. Hence, in order to support an event PMD which has an internal event port in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application should use the rte_event_crypto_adapter_enqueue() API to enqueue events. When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the application can use rte_event_enqueue_burst() as before, i.e. retrieve the event port used by the crypto adapter, link its event queues to that port and enqueue events using rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 66 ++++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 7 files changed, 142 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git 
a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..003667759 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -522,6 +522,72 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. 
If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. + */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + if (dev_id >= RTE_EVENT_MAX_DEVS || + !rte_eventdevs[dev_id].attached) { + rte_errno = EINVAL; + return 0; + } + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index b57363f80..5674bd38e 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1405,6 +1405,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1427,6 +1436,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if 
(eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 9fc39e9ca..b50027f88 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1276,6 +1276,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. */ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1347,6 +1351,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1359,7 +1365,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + 
RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 3e5c09cfd..c63ba7a9c 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -138,6 +138,9 @@ EXPERIMENTAL { __rte_eventdev_trace_port_setup; # added in 20.11 rte_event_pmd_pci_probe_named; + + # added in 21.05 + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 34 +++++--- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 117 insertions(+), 17 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..a72285892 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -438,11 +439,23 @@ static __rte_always_inline void __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + + inst.u[0] = 0; inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +466,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = 
cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -481,6 +493,7 @@ static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; @@ -488,7 +501,7 @@ otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, uint64_t lmt_status; if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); + otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); return 0; } @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 
770a801c4..59450521a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. 
*/ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..470d2e274 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. + */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uint64_t base, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = 
cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->base, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->base[!ws->vws], ev); +} +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
>Subject: [PATCH v1 1/2] eventdev: introduce crypto adapter enqueue >API > >From: Akhil Goyal <gakhil@marvell.com> > >In case an event from a previous stage is required to be forwarded >to a crypto adapter and PMD supports internal event port in crypto >adapter, exposed via capability >RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we >do not have >a way to check in the API rte_event_enqueue_burst(), whether it is >for crypto adapter or for eth tx adapter. > >Hence we need a new API similar to >rte_event_eth_tx_adapter_enqueue(), >which can send to a crypto adapter. > >Note that RTE_EVENT_TYPE_* cannot be used to make that decision, >as it is meant for event source and not event destination. >And event port designated for crypto adapter is designed to be used >for OP_NEW mode. > >Hence, in order to support an event PMD which has an internal event >port >in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD >mode), exposed >via capability >RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, >application should use rte_event_crypto_adapter_enqueue() API to >enqueue >events. > >When internal port is not >available(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), >application can use API rte_event_enqueue_burst() as it was doing >earlier, >i.e. retrieve event port used by crypto adapter and bind its event queues >to that port and enqueue events using the API >rte_event_enqueue_burst(). > >Signed-off-by: Akhil Goyal <gakhil@marvell.com> >--- > .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- > lib/librte_eventdev/eventdev_trace_points.c | 3 + > .../rte_event_crypto_adapter.h | 66 ++++++++++++++++++ > lib/librte_eventdev/rte_eventdev.c | 10 +++ > lib/librte_eventdev/rte_eventdev.h | 8 ++- > lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ > lib/librte_eventdev/version.map | 3 + Please update release notes. 
> 7 files changed, 142 insertions(+), 27 deletions(-) > >diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst >b/doc/guides/prog_guide/event_crypto_adapter.rst >index 1e3eb7139..4fb5c688e 100644 >--- a/doc/guides/prog_guide/event_crypto_adapter.rst >+++ b/doc/guides/prog_guide/event_crypto_adapter.rst >@@ -55,21 +55,22 @@ which is needed to enqueue an event after the >crypto operation is completed. > RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > >-In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW >supports >-RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD >capability the application >-can directly submit the crypto operations to the cryptodev. >-If not, application retrieves crypto adapter's event port using >-rte_event_crypto_adapter_event_port_get() API. Then, links its event >-queue to this port and starts enqueuing crypto operations as events >-to the eventdev. The adapter then dequeues the events and submits >the >-crypto operations to the cryptodev. After the crypto completions, the >-adapter enqueues events to the event device. >-Application can use this mode, when ingress packet ordering is needed. >-In this mode, events dequeued from the adapter will be treated as >-forwarded events. The application needs to specify the cryptodev ID >-and queue pair ID (request information) needed to enqueue a crypto >-operation in addition to the event information (response information) >-needed to enqueue an event after the crypto operation has completed. >+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the >event PMD and crypto >+PMD supports internal event port >+(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), >the application should >+use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto >operations as >+events to crypto adapter. 
If not, application retrieves crypto adapter's >event >+port using ``rte_event_crypto_adapter_event_port_get()`` API, links its >event >+queue to this port and starts enqueuing crypto operations as events to >eventdev >+using ``rte_event_enqueue_burst()``. The adapter then dequeues the >events and >+submits the crypto operations to the cryptodev. After the crypto >operation is >+complete, the adapter enqueues events to the event device. The >application can >+use this mode when ingress packet ordering is needed. In this mode, >events >+dequeued from the adapter will be treated as forwarded events. The >application >+needs to specify the cryptodev ID and queue pair ID (request >information) needed >+to enqueue a crypto operation in addition to the event information >(response >+information) needed to enqueue an event after the crypto operation >has >+completed. > > .. _figure_event_crypto_adapter_op_forward: > >@@ -120,28 +121,44 @@ service function and needs to create an >event port for it. The callback is > expected to fill the ``struct rte_event_crypto_adapter_conf`` structure > passed to it. > >-For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event >port created by adapter >-can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` >API. >-Application can use this event port to link with event queue on which it >-enqueues events towards the crypto adapter. >+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the >event PMD and crypto >+PMD supports internal event port >+(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), >events with crypto >+operations should be enqueued to the crypto adapter using >+``rte_event_crypto_adapter_enqueue()`` API. If not, the event port >created by >+the adapter can be retrieved using >``rte_event_crypto_adapter_event_port_get()`` >+API. An application can use this event port to link with an event queue, >on >+which it enqueues events towards the crypto adapter using >+``rte_event_enqueue_burst()``. > > .. 
code-block:: c > >- uint8_t id, evdev, crypto_ev_port_id, app_qid; >+ uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; > struct rte_event ev; >+ uint32_t cap; > int ret; > >- ret = rte_event_crypto_adapter_event_port_get(id, >&crypto_ev_port_id); >- ret = rte_event_queue_setup(evdev, app_qid, NULL); >- ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, >NULL, 1); >- > // Fill in event info and update event_ptr with rte_crypto_op > memset(&ev, 0, sizeof(ev)); >- ev.queue_id = app_qid; > . > . > ev.event_ptr = op; >- ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, >nb_events); >+ >+ ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, >&cap); >+ if (cap & >RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { >+ ret = rte_event_crypto_adapter_enqueue(evdev_id, >app_ev_port_id, >+ ev, nb_events); >+ } else { >+ ret = rte_event_crypto_adapter_event_port_get(id, >+ &crypto_ev_port_id); >+ ret = rte_event_queue_setup(evdev_id, app_qid, NULL); >+ ret = rte_event_port_link(evdev_id, crypto_ev_port_id, >&app_qid, >+ NULL, 1); >+ ev.queue_id = app_qid; >+ ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, >+ nb_events); >+ } >+ > > Querying adapter capabilities > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >diff --git a/lib/librte_eventdev/eventdev_trace_points.c >b/lib/librte_eventdev/eventdev_trace_points.c >index 1a0ccc448..3867ec800 100644 >--- a/lib/librte_eventdev/eventdev_trace_points.c >+++ b/lib/librte_eventdev/eventdev_trace_points.c >@@ -118,3 +118,6 @@ >RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_sta >rt, > > >RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_sto >p, > lib.eventdev.crypto.stop) >+ >+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_e >nqueue, >+ lib.eventdev.crypto.enq) >diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h >b/lib/librte_eventdev/rte_event_crypto_adapter.h >index 60630ef66..003667759 100644 >--- a/lib/librte_eventdev/rte_event_crypto_adapter.h >+++ 
b/lib/librte_eventdev/rte_event_crypto_adapter.h >@@ -522,6 +522,72 @@ >rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t >*service_id); > int > rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t >*event_port_id); > >+/** >+ * Enqueue a burst of crypto operations as events object supplied in >*rte_event* >+ * structure on an event crypto adapter designated by its event >*dev_id* through >+ * the event port specified by *port_id*. This function is supported if >the >+ * eventdev PMD has the >#RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD >+ * capability flag set. >+ * >+ * The *nb_events* parameter is the number of event objects to >enqueue which are >+ * supplied in the *ev* array of *rte_event* structure. >+ * >+ * The rte_event_crypto_adapter_enqueue() function returns the >number of >+ * events objects it actually enqueued. A return value equal to >*nb_events* >+ * means that all event objects have been enqueued. >+ * >+ * @param dev_id >+ * The identifier of the device. >+ * @param port_id >+ * The identifier of the event port. >+ * @param ev >+ * Points to an array of *nb_events* objects of type *rte_event* >structure >+ * which contain the event object enqueue operations to be >processed. >+ * @param nb_events >+ * The number of event objects to enqueue, typically number of >+ * >rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) >+ * available for this port. >+ * >+ * @return >+ * The number of event objects actually enqueued on the event >device. The >+ * return value can be less than the value of the *nb_events* >parameter when >+ * the event devices queue is full or if invalid parameters are specified >in a >+ * *rte_event*. If the return value is less than *nb_events*, the >remaining >+ * events at the end of ev[] are not consumed and the caller has to >take care >+ * of them, and rte_errno is set accordingly. 
Possible errno values >include: >+ * - EINVAL The port ID is invalid, device ID is invalid, an event's >queue >+ * ID is invalid, or an event's sched type doesn't match the >+ * capabilities of the destination queue. >+ * - ENOSPC The event port was backpressured and unable to >enqueue >+ * one or more events. This error code is only applicable to >+ * closed systems. >+ */ >+static inline uint16_t >+rte_event_crypto_adapter_enqueue(uint8_t dev_id, >+ uint8_t port_id, >+ struct rte_event ev[], >+ uint16_t nb_events) >+{ >+ const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; >+ >+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG >+ if (dev_id >= RTE_EVENT_MAX_DEVS || >+ !rte_eventdevs[dev_id].attached) { >+ rte_errno = EINVAL; >+ return 0; Please use RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET >+ } >+ >+ if (port_id >= dev->data->nb_ports) { >+ rte_errno = EINVAL; >+ return 0; >+ } >+#endif >+ rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, >ev, >+ nb_events); >+ >+ return dev->ca_enqueue(dev->data->ports[port_id], ev, >nb_events); >+} >+ > #ifdef __cplusplus > } > #endif >diff --git a/lib/librte_eventdev/rte_eventdev.c >b/lib/librte_eventdev/rte_eventdev.c >index b57363f80..5674bd38e 100644 >--- a/lib/librte_eventdev/rte_eventdev.c >+++ b/lib/librte_eventdev/rte_eventdev.c >@@ -1405,6 +1405,15 @@ >rte_event_tx_adapter_enqueue(__rte_unused void *port, > return 0; > } > >+static uint16_t >+rte_event_crypto_adapter_enqueue(__rte_unused void *port, >+ __rte_unused struct rte_event ev[], >+ __rte_unused uint16_t nb_events) >+{ >+ rte_errno = ENOTSUP; >+ return 0; >+} >+ > struct rte_eventdev * > rte_event_pmd_allocate(const char *name, int socket_id) > { >@@ -1427,6 +1436,7 @@ rte_event_pmd_allocate(const char *name, >int socket_id) > > eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; > eventdev->txa_enqueue_same_dest = >rte_event_tx_adapter_enqueue; >+ eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; > > if (eventdev->data == NULL) { > struct rte_eventdev_data 
*eventdev_data = NULL; >diff --git a/lib/librte_eventdev/rte_eventdev.h >b/lib/librte_eventdev/rte_eventdev.h >index 9fc39e9ca..b50027f88 100644 >--- a/lib/librte_eventdev/rte_eventdev.h >+++ b/lib/librte_eventdev/rte_eventdev.h >@@ -1276,6 +1276,10 @@ typedef uint16_t >(*event_tx_adapter_enqueue_same_dest)(void *port, > * burst having same destination Ethernet port & Tx queue. > */ > >+typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, >+ struct rte_event ev[], uint16_t >nb_events); >+/**< @internal Enqueue burst of events on crypto adapter */ >+ > #define RTE_EVENTDEV_NAME_MAX_LEN (64) > /**< @internal Max length of name of event PMD */ > >@@ -1347,6 +1351,8 @@ struct rte_eventdev { > */ > event_tx_adapter_enqueue txa_enqueue; > /**< Pointer to PMD eth Tx adapter enqueue function. */ >+ event_crypto_adapter_enqueue ca_enqueue; >+ /**< Pointer to PMD crypto adapter enqueue function. */ > struct rte_eventdev_data *data; > /**< Pointer to device data */ > struct rte_eventdev_ops *dev_ops; >@@ -1359,7 +1365,7 @@ struct rte_eventdev { > /**< Flag indicating the device is attached */ > > uint64_t reserved_64s[4]; /**< Reserved for future fields */ >- void *reserved_ptrs[4]; /**< Reserved for future fields */ >+ void *reserved_ptrs[3]; /**< Reserved for future fields */ > } __rte_cache_aligned; > > extern struct rte_eventdev *rte_eventdevs; >diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h >b/lib/librte_eventdev/rte_eventdev_trace_fp.h >index 349129c0f..5639e0b83 100644 >--- a/lib/librte_eventdev/rte_eventdev_trace_fp.h >+++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h >@@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( > rte_trace_point_emit_u8(flags); > ) > >+RTE_TRACE_POINT_FP( >+ rte_eventdev_trace_crypto_adapter_enqueue, >+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void >*ev_table, >+ uint16_t nb_events), >+ rte_trace_point_emit_u8(dev_id); >+ rte_trace_point_emit_u8(port_id); >+ rte_trace_point_emit_ptr(ev_table); >+ 
rte_trace_point_emit_u16(nb_events); >+) >+ > RTE_TRACE_POINT_FP( > rte_eventdev_trace_timer_arm_burst, > RTE_TRACE_POINT_ARGS(const void *adapter, void >**evtims_table, >diff --git a/lib/librte_eventdev/version.map >b/lib/librte_eventdev/version.map >index 3e5c09cfd..c63ba7a9c 100644 >--- a/lib/librte_eventdev/version.map >+++ b/lib/librte_eventdev/version.map >@@ -138,6 +138,9 @@ EXPERIMENTAL { > __rte_eventdev_trace_port_setup; > # added in 20.11 > rte_event_pmd_pci_probe_named; >+ >+ # added in 21.05 >+ __rte_eventdev_trace_crypto_adapter_enqueue; > }; > > INTERNAL { >-- >2.25.1
>-----Original Message----- >From: Shijith Thotton <sthotton@marvell.com> >Sent: Friday, March 26, 2021 2:42 PM >To: dev@dpdk.org >Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net; >Jerin Jacob Kollanukkaran <jerinj@marvell.com>; >abhinandan.gujjar@intel.com; hemant.agrawal@nxp.com; >nipun.gupta@nxp.com; sachin.saxena@oss.nxp.com; Anoob Joseph ><anoobj@marvell.com>; matan@nvidia.com; >roy.fan.zhang@intel.com; g.singh@nxp.com; erik.g.carrillo@intel.com; >jay.jayatheerthan@intel.com; Pavan Nikhilesh Bhagavatula ><pbhagavatula@marvell.com>; harry.van.haaren@intel.com; Akhil >Goyal <gakhil@marvell.com> >Subject: [PATCH v1 2/2] event/octeontx2: support crypto adapter >forward mode > >Advertise crypto adapter forward mode capability and set crypto >adapter >enqueue function in driver. > >Signed-off-by: Shijith Thotton <sthotton@marvell.com> >--- > drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 34 +++++--- > drivers/event/octeontx2/otx2_evdev.c | 5 +- > .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- > ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- > .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 >+++++++++++++++++++ > drivers/event/octeontx2/otx2_worker.h | 2 +- > drivers/event/octeontx2/otx2_worker_dual.h | 2 +- > 7 files changed, 117 insertions(+), 17 deletions(-) > rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => >otx2_evdev_crypto_adptr_rx.h} (93%) > create mode 100644 >drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > >diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c >b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c >index cec20b5c6..a72285892 100644 >--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c >+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c >@@ -7,6 +7,7 @@ > #include <rte_cryptodev_pmd.h> > #include <rte_errno.h> > #include <rte_ethdev.h> >+#include <rte_event_crypto_adapter.h> > > #include "otx2_cryptodev.h" > #include "otx2_cryptodev_capabilities.h" >@@ -438,11 +439,23 @@ static 
__rte_always_inline void __rte_hot > otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, > struct cpt_request_info *req, > void *lmtline, >+ struct rte_crypto_op *op, > uint64_t cpt_inst_w7) > { >+ union rte_event_crypto_metadata *m_data; > union cpt_inst_s inst; > uint64_t lmt_status; > >+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) >+ m_data = rte_cryptodev_sym_session_get_user_data( >+ op->sym->session); >+ else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && >+ op->private_data_offset) >+ m_data = (union rte_event_crypto_metadata *) >+ ((uint8_t *)op + >+ op->private_data_offset); >+ >+ > inst.u[0] = 0; > inst.s9x.res_addr = req->comp_baddr; > inst.u[2] = 0; >@@ -453,12 +466,11 @@ otx2_ca_enqueue_req(const struct >otx2_cpt_qp *qp, > inst.s9x.ei2 = req->ist.ei2; > inst.s9x.ei3 = cpt_inst_w7; > >- inst.s9x.qord = 1; >- inst.s9x.grp = qp->ev.queue_id; >- inst.s9x.tt = qp->ev.sched_type; >- inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | >- qp->ev.flow_id; >- inst.s9x.wq_ptr = (uint64_t)req >> 3; >+ inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | >+ m_data->response_info.flow_id) | >+ ((uint64_t)m_data->response_info.sched_type << 32) >| >+ ((uint64_t)m_data->response_info.queue_id << 34)); >+ inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); > req->qp = qp; > > do { >@@ -481,6 +493,7 @@ static __rte_always_inline int32_t __rte_hot > otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, > struct pending_queue *pend_q, > struct cpt_request_info *req, >+ struct rte_crypto_op *op, > uint64_t cpt_inst_w7) > { > void *lmtline = qp->lmtline; >@@ -488,7 +501,7 @@ otx2_cpt_enqueue_req(const struct >otx2_cpt_qp *qp, > uint64_t lmt_status; > > if (qp->ca_enable) { >- otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); >+ otx2_ca_enqueue_req(qp, req, lmtline, op, >cpt_inst_w7); > return 0; > } > >@@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp >*qp, > goto req_fail; > } > >- ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess- >>cpt_inst_w7); >+ ret = 
otx2_cpt_enqueue_req(qp, pend_q, params.req, op, >+ sess->cpt_inst_w7); > > if (unlikely(ret)) { > CPT_LOG_DP_ERR("Could not enqueue crypto req"); >@@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp >*qp, struct rte_crypto_op *op, > return ret; > } > >- ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess- >>cpt_inst_w7); >+ ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess- >>cpt_inst_w7); > > if (unlikely(ret)) { > /* Free buffer allocated by fill params routines */ >@@ -707,7 +721,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp >*qp, struct rte_crypto_op *op, > return ret; > } > >- ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess- >>cpt_inst_w7); >+ ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess- >>cpt_inst_w7); > > if (winsz && esn) { > seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; >diff --git a/drivers/event/octeontx2/otx2_evdev.c >b/drivers/event/octeontx2/otx2_evdev.c >index 770a801c4..59450521a 100644 >--- a/drivers/event/octeontx2/otx2_evdev.c >+++ b/drivers/event/octeontx2/otx2_evdev.c >@@ -12,8 +12,9 @@ > #include <rte_mbuf_pool_ops.h> > #include <rte_pci.h> > >-#include "otx2_evdev_stats.h" > #include "otx2_evdev.h" >+#include "otx2_evdev_crypto_adptr_tx.h" >+#include "otx2_evdev_stats.h" > #include "otx2_irq.h" > #include "otx2_tim_evdev.h" > >@@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC > [!!(dev->tx_offloads & >NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] > [!!(dev->tx_offloads & >NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; > } >+ event_dev->ca_enqueue = otx2_ssogws_ca_enq; > > if (dev->dual_ws) { > event_dev->enqueue = >otx2_ssogws_dual_enq; >@@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC > [!!(dev->tx_offloads & > > NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; > } >+ event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; > } > > event_dev->txa_enqueue_same_dest = event_dev- >>txa_enqueue; >diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c >b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c >index 4e8a96cb6..2c9b347f0 100644 >--- 
a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c >+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c >@@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev >*dev, > RTE_SET_USED(cdev); > > *caps = >RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | >- > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NE >W; >+ > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NE >W | >+ > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FW >D; > > return 0; > } >diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h >b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h >similarity index 93% >rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h >rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h >index 70b63933e..9e331fdd7 100644 >--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h >+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h >@@ -2,8 +2,8 @@ > * Copyright (C) 2020 Marvell International Ltd. > */ > >-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ >-#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ >+#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ >+#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > > #include <rte_cryptodev.h> > #include <rte_cryptodev_pmd.h> >@@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) > > return (uint64_t)(cop); > } >-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ >+#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ >diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h >b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h >new file mode 100644 >index 000000000..470d2e274 >--- /dev/null >+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h >@@ -0,0 +1,82 @@ >+/* SPDX-License-Identifier: BSD-3-Clause >+ * Copyright (C) 2021 Marvell International Ltd. 
>+ */ >+ >+#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ >+#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ >+ >+#include <rte_cryptodev.h> >+#include <rte_cryptodev_pmd.h> >+#include <rte_event_crypto_adapter.h> >+#include <rte_eventdev.h> >+ >+#include <otx2_cryptodev_qp.h> >+#include <otx2_worker.h> >+ >+static inline uint16_t >+otx2_ca_enq(uint64_t base, const struct rte_event *ev) >+{ >+ union rte_event_crypto_metadata *m_data; >+ struct rte_crypto_op *crypto_op; >+ struct rte_cryptodev *cdev; >+ struct otx2_cpt_qp *qp; >+ uint8_t cdev_id; >+ uint16_t qp_id; >+ >+ crypto_op = ev->event_ptr; >+ if (crypto_op == NULL) >+ return 0; >+ >+ if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) >{ >+ m_data = rte_cryptodev_sym_session_get_user_data( >+ crypto_op->sym- >>session); >+ if (m_data == NULL) >+ goto free_op; >+ >+ cdev_id = m_data->request_info.cdev_id; >+ qp_id = m_data->request_info.queue_pair_id; >+ } else if (crypto_op->sess_type == >RTE_CRYPTO_OP_SESSIONLESS && >+ crypto_op->private_data_offset) { >+ m_data = (union rte_event_crypto_metadata *) >+ ((uint8_t *)crypto_op + >+ crypto_op->private_data_offset); >+ cdev_id = m_data->request_info.cdev_id; >+ qp_id = m_data->request_info.queue_pair_id; >+ } else { >+ goto free_op; >+ } >+ >+ cdev = &rte_cryptodevs[cdev_id]; >+ qp = cdev->data->queue_pairs[qp_id]; >+ >+ if (!ev->sched_type) >+ otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG); Directly pass the TAG address. 
>+ if (qp->ca_enable) >+ return cdev->enqueue_burst(qp, &crypto_op, 1); >+ >+free_op: >+ rte_pktmbuf_free(crypto_op->sym->m_src); >+ rte_crypto_op_free(crypto_op); >+ return 0; >+} >+ >+static uint16_t __rte_hot >+otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t >nb_events) >+{ >+ struct otx2_ssogws *ws = port; >+ >+ RTE_SET_USED(nb_events); >+ >+ return otx2_ca_enq(ws->base, ev); ws->tag_op >+} >+ >+static uint16_t __rte_hot >+otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t >nb_events) >+{ >+ struct otx2_ssogws_dual *ws = port; >+ >+ RTE_SET_USED(nb_events); >+ >+ return otx2_ca_enq(ws->base[!ws->vws], ev); ws->ws_state[!ws->vws].tag_op >+} >+#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ >diff --git a/drivers/event/octeontx2/otx2_worker.h >b/drivers/event/octeontx2/otx2_worker.h >index 2b716c042..fd149be91 100644 >--- a/drivers/event/octeontx2/otx2_worker.h >+++ b/drivers/event/octeontx2/otx2_worker.h >@@ -10,7 +10,7 @@ > > #include <otx2_common.h> > #include "otx2_evdev.h" >-#include "otx2_evdev_crypto_adptr_dp.h" >+#include "otx2_evdev_crypto_adptr_rx.h" > #include "otx2_ethdev_sec_tx.h" > > /* SSO Operations */ >diff --git a/drivers/event/octeontx2/otx2_worker_dual.h >b/drivers/event/octeontx2/otx2_worker_dual.h >index 72b616439..36ae4dd88 100644 >--- a/drivers/event/octeontx2/otx2_worker_dual.h >+++ b/drivers/event/octeontx2/otx2_worker_dual.h >@@ -10,7 +10,7 @@ > > #include <otx2_common.h> > #include "otx2_evdev.h" >-#include "otx2_evdev_crypto_adptr_dp.h" >+#include "otx2_evdev_crypto_adptr_rx.h" > > /* SSO Operations */ > static __rte_always_inline uint16_t >-- >2.25.1
On Sat, Mar 27, 2021 at 06:27:49AM +0000, Pavan Nikhilesh Bhagavatula wrote:
lmtline, cpt_inst_w7); > >+ otx2_ca_enqueue_req(qp, req, lmtline, op, > >cpt_inst_w7); > > return 0; > > } > > > >@@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp > >*qp, > > goto req_fail; > > } > > > >- ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess- > >>cpt_inst_w7); > >+ ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, > >+ sess->cpt_inst_w7); > > > > if (unlikely(ret)) { > > CPT_LOG_DP_ERR("Could not enqueue crypto req"); > >@@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp > >*qp, struct rte_crypto_op *op, > > return ret; > > } > > > >- ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess- > >>cpt_inst_w7); > >+ ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess- > >>cpt_inst_w7); > > > > if (unlikely(ret)) { > > /* Free buffer allocated by fill params routines */ > >@@ -707,7 +721,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp > >*qp, struct rte_crypto_op *op, > > return ret; > > } > > > >- ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess- > >>cpt_inst_w7); > >+ ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess- > >>cpt_inst_w7); > > > > if (winsz && esn) { > > seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; > >diff --git a/drivers/event/octeontx2/otx2_evdev.c > >b/drivers/event/octeontx2/otx2_evdev.c > >index 770a801c4..59450521a 100644 > >--- a/drivers/event/octeontx2/otx2_evdev.c > >+++ b/drivers/event/octeontx2/otx2_evdev.c > >@@ -12,8 +12,9 @@ > > #include <rte_mbuf_pool_ops.h> > > #include <rte_pci.h> > > > >-#include "otx2_evdev_stats.h" > > #include "otx2_evdev.h" > >+#include "otx2_evdev_crypto_adptr_tx.h" > >+#include "otx2_evdev_stats.h" > > #include "otx2_irq.h" > > #include "otx2_tim_evdev.h" > > > >@@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC > > [!!(dev->tx_offloads & > >NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] > > [!!(dev->tx_offloads & > >NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; > > } > >+ event_dev->ca_enqueue = otx2_ssogws_ca_enq; > > > > if (dev->dual_ws) { > > event_dev->enqueue = > >otx2_ssogws_dual_enq; > >@@ 
-473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC > > [!!(dev->tx_offloads & > > > > NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; > > } > >+ event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; > > } > > > > event_dev->txa_enqueue_same_dest = event_dev- > >>txa_enqueue; > >diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > >b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > >index 4e8a96cb6..2c9b347f0 100644 > >--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > >+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > >@@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev > >*dev, > > RTE_SET_USED(cdev); > > > > *caps = > >RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | > >- > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NE > >W; > >+ > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NE > >W | > >+ > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FW > >D; > > > > return 0; > > } > >diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h > >b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h > >similarity index 93% > >rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h > >rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h > >index 70b63933e..9e331fdd7 100644 > >--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h > >+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h > >@@ -2,8 +2,8 @@ > > * Copyright (C) 2020 Marvell International Ltd. 
> > */ > > > >-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ > >-#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ > >+#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > >+#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > > > > #include <rte_cryptodev.h> > > #include <rte_cryptodev_pmd.h> > >@@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) > > > > return (uint64_t)(cop); > > } > >-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ > >+#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ > >diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > >b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > >new file mode 100644 > >index 000000000..470d2e274 > >--- /dev/null > >+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > >@@ -0,0 +1,82 @@ > >+/* SPDX-License-Identifier: BSD-3-Clause > >+ * Copyright (C) 2021 Marvell International Ltd. > >+ */ > >+ > >+#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ > >+#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ > >+ > >+#include <rte_cryptodev.h> > >+#include <rte_cryptodev_pmd.h> > >+#include <rte_event_crypto_adapter.h> > >+#include <rte_eventdev.h> > >+ > >+#include <otx2_cryptodev_qp.h> > >+#include <otx2_worker.h> > >+ > >+static inline uint16_t > >+otx2_ca_enq(uint64_t base, const struct rte_event *ev) > >+{ > >+ union rte_event_crypto_metadata *m_data; > >+ struct rte_crypto_op *crypto_op; > >+ struct rte_cryptodev *cdev; > >+ struct otx2_cpt_qp *qp; > >+ uint8_t cdev_id; > >+ uint16_t qp_id; > >+ > >+ crypto_op = ev->event_ptr; > >+ if (crypto_op == NULL) > >+ return 0; > >+ > >+ if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) > >{ > >+ m_data = rte_cryptodev_sym_session_get_user_data( > >+ crypto_op->sym- > >>session); > >+ if (m_data == NULL) > >+ goto free_op; > >+ > >+ cdev_id = m_data->request_info.cdev_id; > >+ qp_id = m_data->request_info.queue_pair_id; > >+ } else if (crypto_op->sess_type == > >RTE_CRYPTO_OP_SESSIONLESS && > >+ crypto_op->private_data_offset) { > >+ m_data = (union rte_event_crypto_metadata *) > >+ ((uint8_t *)crypto_op 
+ > >+ crypto_op->private_data_offset); > >+ cdev_id = m_data->request_info.cdev_id; > >+ qp_id = m_data->request_info.queue_pair_id; > >+ } else { > >+ goto free_op; > >+ } > >+ > >+ cdev = &rte_cryptodevs[cdev_id]; > >+ qp = cdev->data->queue_pairs[qp_id]; > >+ > >+ if (!ev->sched_type) > >+ otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG); > > Directly pass the TAG address. > Ack. Will send v2 with the change. > >+ if (qp->ca_enable) > >+ return cdev->enqueue_burst(qp, &crypto_op, 1); > >+ > >+free_op: > >+ rte_pktmbuf_free(crypto_op->sym->m_src); > >+ rte_crypto_op_free(crypto_op); > >+ return 0; > >+} > >+ > >+static uint16_t __rte_hot > >+otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t > >nb_events) > >+{ > >+ struct otx2_ssogws *ws = port; > >+ > >+ RTE_SET_USED(nb_events); > >+ > >+ return otx2_ca_enq(ws->base, ev); > > ws->tag_op > > >+} > >+ > >+static uint16_t __rte_hot > >+otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t > >nb_events) > >+{ > >+ struct otx2_ssogws_dual *ws = port; > >+ > >+ RTE_SET_USED(nb_events); > >+ > >+ return otx2_ca_enq(ws->base[!ws->vws], ev); > > ws->ws_state[!ws->vws].tag_op > > >+} > >+#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ > >diff --git a/drivers/event/octeontx2/otx2_worker.h > >b/drivers/event/octeontx2/otx2_worker.h > >index 2b716c042..fd149be91 100644 > >--- a/drivers/event/octeontx2/otx2_worker.h > >+++ b/drivers/event/octeontx2/otx2_worker.h > >@@ -10,7 +10,7 @@ > > > > #include <otx2_common.h> > > #include "otx2_evdev.h" > >-#include "otx2_evdev_crypto_adptr_dp.h" > >+#include "otx2_evdev_crypto_adptr_rx.h" > > #include "otx2_ethdev_sec_tx.h" > > > > /* SSO Operations */ > >diff --git a/drivers/event/octeontx2/otx2_worker_dual.h > >b/drivers/event/octeontx2/otx2_worker_dual.h > >index 72b616439..36ae4dd88 100644 > >--- a/drivers/event/octeontx2/otx2_worker_dual.h > >+++ b/drivers/event/octeontx2/otx2_worker_dual.h > >@@ -10,7 +10,7 @@ > > > > #include <otx2_common.h> > > #include 
"otx2_evdev.h" > >-#include "otx2_evdev_crypto_adptr_dp.h" > >+#include "otx2_evdev_crypto_adptr_rx.h" > > > > /* SSO Operations */ > > static __rte_always_inline uint16_t > >-- > >2.25.1 >
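The v1 patch reviewed above routes each crypto op to its queue pair using per-op adapter metadata. As a self-contained sketch of that lookup in otx2_ca_enq() — all type names here are stand-ins, the real ones are struct rte_crypto_op and union rte_event_crypto_metadata:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in types; the real ones are struct rte_crypto_op and
 * union rte_event_crypto_metadata from the DPDK headers. */
struct fake_request_info { uint8_t cdev_id; uint16_t queue_pair_id; };
union fake_crypto_metadata { struct fake_request_info request_info; };

enum fake_sess_type { OP_WITH_SESSION, OP_SESSIONLESS };

struct fake_crypto_op {
	enum fake_sess_type sess_type;
	uint16_t private_data_offset;	/* metadata offset from op start */
	void *session_user_data;	/* per-session adapter metadata */
};

/* Locate the adapter metadata the same way otx2_ca_enq() does: session
 * user data for session ops, op base + private_data_offset for
 * sessionless ops; NULL means the op cannot be routed and is freed. */
static union fake_crypto_metadata *
get_adapter_metadata(struct fake_crypto_op *op)
{
	if (op->sess_type == OP_WITH_SESSION)
		return op->session_user_data;
	if (op->sess_type == OP_SESSIONLESS && op->private_data_offset)
		return (union fake_crypto_metadata *)
			((uint8_t *)op + op->private_data_offset);
	return NULL;
}
```

The cdev_id and queue_pair_id read from this metadata are what let the enqueue path find the destination queue pair without any adapter event port in between.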
This series proposes a new event device enqueue operation to be used
when crypto adapter forward mode is supported. The second patch in the
series implements it in the PMD. Test application changes for the usage
of the new API are yet to be added.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (1):
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (1):
  event/octeontx2: support crypto adapter forward mode

 .../prog_guide/event_crypto_adapter.rst       | 69 ++++++++++------
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 82 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 62 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  8 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  3 +
 15 files changed, 265 insertions(+), 48 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

--
2.25.1
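A minimal sketch of the application-side decision this series enables: choosing between the new adapter enqueue path and the classic eventdev path based on the forward-mode capability. The capability bit values are reproduced here for illustration only; real applications should include rte_event_crypto_adapter.h and query the flags with rte_event_crypto_adapter_caps_get():

```c
#include <assert.h>
#include <stdint.h>

/* Capability bit values reproduced from rte_event_crypto_adapter.h
 * for illustration; real code should use the DPDK header directly. */
#define CAP_INTERNAL_PORT_OP_NEW  (1U << 0)
#define CAP_INTERNAL_PORT_OP_FWD  (1U << 1)

/* Which enqueue API the application should call in OP_FORWARD mode. */
enum enq_path {
	ENQ_VIA_ADAPTER,  /* rte_event_crypto_adapter_enqueue() */
	ENQ_VIA_EVENTDEV  /* rte_event_enqueue_burst() via adapter port */
};

static enum enq_path
select_fwd_enqueue_path(uint32_t caps)
{
	/* With an internal event port in the crypto adapter, the new
	 * fast-path API must be used; otherwise link a queue to the
	 * adapter's event port and enqueue through the eventdev. */
	if (caps & CAP_INTERNAL_PORT_OP_FWD)
		return ENQ_VIA_ADAPTER;
	return ENQ_VIA_EVENTDEV;
}
```

This mirrors the capability check added to the programmer's guide example in patch 1.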
From: Akhil Goyal <gakhil@marvell.com>

In case an event from a previous stage is required to be forwarded to a
crypto adapter and the PMD supports an internal event port in the crypto
adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, there is no way to
check inside rte_event_enqueue_burst() whether the event is meant for a
crypto adapter or for an eth Tx adapter. Hence a new API, similar to
rte_event_eth_tx_adapter_enqueue(), is needed to send events to a crypto
adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that
decision, as it describes the event source, not the event destination.
Also, the event port designated for the crypto adapter is designed to be
used in OP_NEW mode.

Hence, to support an event PMD which has an internal event port in the
crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via
the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the
application should use the rte_event_crypto_adapter_enqueue() API to
enqueue events. When an internal port is not available
(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the application can use
rte_event_enqueue_burst() as before, i.e. retrieve the event port used
by the crypto adapter, bind its event queues to that port, and enqueue
events using rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 62 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 8 files changed, 144 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index e2b0886a9..0bee94877 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -106,6 +106,12 @@ New Features * Added support for periodic timer mode in eventdev timer adapter. * Added support for periodic timer mode in octeontx2 event device driver. 
+* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..90e9a7863 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -522,6 +522,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. 
+ * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index b57363f80..5674bd38e 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1405,6 +1405,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1427,6 +1436,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 9fc39e9ca..b50027f88 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1276,6 +1276,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1347,6 +1351,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1359,7 +1365,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 3e5c09cfd..c63ba7a9c 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -138,6 +138,9 @@ EXPERIMENTAL { __rte_eventdev_trace_port_setup; # added in 20.11 rte_event_pmd_pci_probe_named; + + # 
added in 21.05 + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
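The rte_eventdev.c change above installs a safe default for the new ca_enqueue function pointer, so calling the new API on a device whose PMD lacks forward-mode support fails cleanly instead of crashing. A stand-alone model of that default-op pattern — all names below are stand-ins for the real rte_eventdev structures:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for struct rte_event and the ca_enqueue dev-op type. */
struct fake_event { uint64_t u64; };
typedef uint16_t (*ca_enqueue_t)(void *port, struct fake_event ev[],
				 uint16_t nb_events);

static int fake_errno;	/* stands in for rte_errno */

/* Default installed at device allocation: fail with ENOTSUP so that
 * rte_event_crypto_adapter_enqueue() is safe on any device. */
static uint16_t
default_ca_enqueue(void *port, struct fake_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev; (void)nb_events;
	fake_errno = ENOTSUP;
	return 0;
}

/* A PMD advertising CAP_INTERNAL_PORT_OP_FWD overrides the default,
 * as the octeontx2 driver does with otx2_ssogws_ca_enq in patch 2. */
static uint16_t
pmd_ca_enqueue(void *port, struct fake_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev;
	return nb_events;	/* pretend every event was accepted */
}

struct fake_eventdev { ca_enqueue_t ca_enqueue; };
```

The same pattern is already used for txa_enqueue, which is why the patch can reuse one of the reserved_ptrs slots for the new pointer without breaking the ABI.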
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 121 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..4808dca64 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + else + return -EINVAL; + inst.u[0] = 0; 
inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ 
On Fri, Mar 26, 2021 at 2:43 PM Shijith Thotton <sthotton@marvell.com> wrote:
>
> This series proposes a new event device enqueue operation if crypto
> adapter forward mode is supported. Second patch in the series is the
> implementation of the same in PMD. Test application changes for the
> usage of new API is yet to add.

Please submit the new version with the test application change.

>
> v1:
> - Added crypto adapter forward mode support for octeontx2.
>
> Akhil Goyal (1):
>   eventdev: introduce crypto adapter enqueue API

@Gujjar, Abhinandan S

Could you review the API changes?

>
> Shijith Thotton (1):
>   event/octeontx2: support crypto adapter forward mode
>
>  .../prog_guide/event_crypto_adapter.rst       | 69 ++++++++++------
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 34 +++++---
>  drivers/event/octeontx2/otx2_evdev.c          |  5 +-
>  .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
>  ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
>  .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 82 +++++++++++++++++++
>  drivers/event/octeontx2/otx2_worker.h         |  2 +-
>  drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
>  lib/librte_eventdev/eventdev_trace_points.c   |  3 +
>  .../rte_event_crypto_adapter.h                | 66 +++++++++++++++
>  lib/librte_eventdev/rte_eventdev.c            | 10 +++
>  lib/librte_eventdev/rte_eventdev.h            |  8 +-
>  lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
>  lib/librte_eventdev/version.map               |  3 +
>  14 files changed, 259 insertions(+), 44 deletions(-)
>  rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
>  create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>
> --
> 2.25.1
>
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, March 30, 2021 9:34 AM
> To: Shijith Thotton <sthotton@marvell.com>
> Cc: dpdk-dev <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>;
> Jerin Jacob <jerinj@marvell.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> Nipun Gupta <nipun.gupta@nxp.com>; sachin.saxena@oss.nxp.com;
> Anoob Joseph <anoobj@marvell.com>; Matan Azrad <matan@nvidia.com>;
> Zhang, Roy Fan <roy.fan.zhang@intel.com>; Gagandeep Singh <g.singh@nxp.com>;
> Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>;
> Van Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: Re: [dpdk-dev] [PATCH v1 0/2] Enhancements to crypto adapter
> forward mode
>
> On Fri, Mar 26, 2021 at 2:43 PM Shijith Thotton <sthotton@marvell.com>
> wrote:
> >
> > This series proposes a new event device enqueue operation if crypto
> > adapter forward mode is supported. Second patch in the series is the
> > implementation of the same in PMD. Test application changes for the
> > usage of new API is yet to add.
>
> Please submit the new version with the test application change.
>
> >
> > v1:
> > - Added crypto adapter forward mode support for octeontx2.
> >
> > Akhil Goyal (1):
> >   eventdev: introduce crypto adapter enqueue API
>
> @Gujjar, Abhinandan S
>
> Could you review the API changes?

Yes Jerin. I will review by this week end.

> >
> > Shijith Thotton (1):
> >   event/octeontx2: support crypto adapter forward mode
> >
> >  .../prog_guide/event_crypto_adapter.rst       | 69 ++++++++++------
> >  drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 34 +++++---
> >  drivers/event/octeontx2/otx2_evdev.c          |  5 +-
> >  .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
> >  ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
> >  .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 82 +++++++++++++++++++
> >  drivers/event/octeontx2/otx2_worker.h         |  2 +-
> >  drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
> >  lib/librte_eventdev/eventdev_trace_points.c   |  3 +
> >  .../rte_event_crypto_adapter.h                | 66 +++++++++++++++
> >  lib/librte_eventdev/rte_eventdev.c            | 10 +++
> >  lib/librte_eventdev/rte_eventdev.h            |  8 +-
> >  lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
> >  lib/librte_eventdev/version.map               |  3 +
> >  14 files changed, 259 insertions(+), 44 deletions(-)
> >  rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
> >  create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >
> > --
> > 2.25.1
> >
This series proposes a new event device enqueue operation if crypto
adapter forward mode is supported. Second patch in the series is the
implementation of the same in PMD. Test application changes are added
in the third patch.

v3:
- Added crypto adapter test application changes.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (1):
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (2):
  event/octeontx2: support crypto adapter forward mode
  test/event_crypto: use crypto adapter enqueue API

 app/test/test_event_crypto_adapter.c          | 29 +++++--
 .../prog_guide/event_crypto_adapter.rst       | 69 ++++++++++------
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 82 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 62 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  8 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  3 +
 16 files changed, 285 insertions(+), 57 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

--
2.25.1
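The dispatch rule the series introduces for applications — call the new crypto adapter enqueue API when the PMD advertises the internal-port forward capability (RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD), fall back to rte_event_enqueue_burst() otherwise — reduces to one flag-guarded branch. A minimal self-contained sketch of that selection logic; the CAP_* bit value and the fake_* enqueue stubs are stand-ins for illustration, not the real DPDK definitions:

```c
#include <stdint.h>

/* Stand-in for RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
 * the real bit position comes from rte_event_crypto_adapter.h. */
#define CAP_INTERNAL_PORT_OP_FWD (1u << 3)

/* Counters let a caller observe which enqueue path was taken. */
static int via_adapter;
static int via_eventdev;

/* Stand-in for rte_event_crypto_adapter_enqueue(). */
static uint16_t
fake_ca_enqueue(uint16_t nb_events)
{
	via_adapter += nb_events;
	return nb_events;
}

/* Stand-in for rte_event_enqueue_burst(). */
static uint16_t
fake_ev_enqueue(uint16_t nb_events)
{
	via_eventdev += nb_events;
	return nb_events;
}

/* Pick the enqueue path once, based on the capability queried at
 * setup time with rte_event_crypto_adapter_caps_get(). */
static uint16_t
send_events(uint32_t caps, uint16_t nb_events)
{
	if (caps & CAP_INTERNAL_PORT_OP_FWD)
		return fake_ca_enqueue(nb_events);
	return fake_ev_enqueue(nb_events);
}
```

In a real application the capability is queried once during setup and stored (as the test application does with its internal_port_op_fwd flag), so the fast path never re-queries it.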
From: Akhil Goyal <gakhil@marvell.com>

In case an event from a previous stage is required to be forwarded to a
crypto adapter and the PMD supports an internal event port in the crypto
adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have a way
to check in the API rte_event_enqueue_burst() whether it is for the
crypto adapter or for the eth Tx adapter. Hence we need a new API,
similar to rte_event_eth_tx_adapter_enqueue(), which can send to a
crypto adapter.

Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it
is meant for the event source and not the event destination. And the
event port designated for the crypto adapter is designed to be used for
OP_NEW mode.

Hence, in order to support an event PMD which has an internal event port
in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode),
exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application
should use the rte_event_crypto_adapter_enqueue() API to enqueue events.
When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW
mode), the application can use the API rte_event_enqueue_burst() as it
was doing earlier, i.e. retrieve the event port used by the crypto
adapter, bind its event queues to that port, and enqueue events using
rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 62 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 8 files changed, 144 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index e2b0886a9..0bee94877 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -106,6 +106,12 @@ New Features * Added support for periodic timer mode in eventdev timer adapter. * Added support for periodic timer mode in octeontx2 event device driver. 
+* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..90e9a7863 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -522,6 +522,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. 
+ * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index b57363f80..5674bd38e 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1405,6 +1405,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1427,6 +1436,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 9fc39e9ca..b50027f88 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1276,6 +1276,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1347,6 +1351,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1359,7 +1365,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 3e5c09cfd..c63ba7a9c 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -138,6 +138,9 @@ EXPERIMENTAL { __rte_eventdev_trace_port_setup; # added in 20.11 rte_event_pmd_pci_probe_named; + + # 
added in 21.05 + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 121 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..4808dca64 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + else + return -EINVAL; + inst.u[0] = 0; 
inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ 
otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 770a801c4..59450521a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from 
This series proposes a new event device enqueue operation for use when crypto adapter forward mode is supported. The second patch in the series implements it in the octeontx2 PMD. Test application changes are added in the third patch. v4: - Fix debug build. v3: - Added crypto adapter test application changes. v2: - Updated release notes. - Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro. - Fixed v1 build error. v1: - Added crypto adapter forward mode support for octeontx2. Akhil Goyal (1): eventdev: introduce crypto adapter enqueue API Shijith Thotton (2): event/octeontx2: support crypto adapter forward mode test/event_crypto: use crypto adapter enqueue API app/test/test_event_crypto_adapter.c | 29 +++++-- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++------ doc/guides/rel_notes/release_21_05.rst | 6 ++ drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 ++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 +- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 16 files changed, 286 insertions(+), 57 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h -- 2.25.1
From: Akhil Goyal <gakhil@marvell.com> In case an event from a previous stage is required to be forwarded to a crypto adapter, and the PMD supports an internal event port in the crypto adapter, exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the API rte_event_enqueue_burst() has no way to check whether the event is meant for the crypto adapter or for the eth Tx adapter. Hence we need a new API, similar to rte_event_eth_tx_adapter_enqueue(), which can send to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it describes the event source and not the event destination. Also, the event port designated for the crypto adapter is designed to be used in OP_NEW mode. Hence, in order to support an event PMD which has an internal event port in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application should use the rte_event_crypto_adapter_enqueue() API to enqueue events. When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the application can use rte_event_enqueue_burst() as before, i.e. retrieve the event port used by the crypto adapter, bind its event queues to that port and enqueue events using rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 3 + 8 files changed, 145 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index e2b0886a9..0bee94877 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -106,6 +106,12 @@ New Features * Added support for periodic timer mode in eventdev timer adapter. * Added support for periodic timer mode in octeontx2 event device driver. 
+* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..a4a4129b7 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include <stdint.h> #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. 
A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index b57363f80..5674bd38e 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1405,6 +1405,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1427,6 +1436,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 9fc39e9ca..b50027f88 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1276,6 +1276,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1347,6 +1351,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1359,7 +1365,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 3e5c09cfd..c63ba7a9c 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -138,6 +138,9 @@ EXPERIMENTAL { __rte_eventdev_trace_port_setup; # added in 20.11 rte_event_pmd_pci_probe_named; + + # 
added in 21.05 + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 121 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..4808dca64 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + else + return -EINVAL; + inst.u[0] = 0; 
inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ 
otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 770a801c4..59450521a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from 
drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..bcc3c473d --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- app/test/test_event_crypto_adapter.c | 29 +++++++++++++++++++--------- 1 file changed, 20 insertions(+), 9 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..2b07f1582 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -741,6 +745,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID, &cap); TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + params.internal_port_op_fwd = 0; + /* Skip mode and capability mismatch check for SW eventdev */ if (!(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) && !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && @@ -771,9 +780,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, 
"Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - &params.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + &params.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +820,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
> -----Original Message----- > From: Shijith Thotton <sthotton@marvell.com> > Sent: Friday, April 2, 2021 10:31 PM > To: dev@dpdk.org > Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net; > jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; > sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com; > Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik > G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay > <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van Haaren, > Harry <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com> > Subject: [PATCH v4 2/3] event/octeontx2: support crypto adapter forward > mode > > Advertise crypto adapter forward mode capability and set crypto adapter > enqueue function in driver. > > Signed-off-by: Shijith Thotton <sthotton@marvell.com> > --- > drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- > drivers/event/octeontx2/otx2_evdev.c | 5 +- > .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => > otx2_evdev_crypto_adptr_rx.h} | 6 +- > .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 82 > +++++++++++++++++++ > drivers/event/octeontx2/otx2_worker.h | 2 +- > drivers/event/octeontx2/otx2_worker_dual.h | 2 +- > 7 files changed, 121 insertions(+), 21 deletions(-) rename > drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => > otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 > drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > > diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c > b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c > index cec20b5c6..4808dca64 100644 > --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c > +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c > @@ -7,6 +7,7 @@ > #include <rte_cryptodev_pmd.h> > #include <rte_errno.h> > #include <rte_ethdev.h> > +#include <rte_event_crypto_adapter.h> > > #include "otx2_cryptodev.h" > #include 
"otx2_cryptodev_capabilities.h" > @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct > rte_crypto_sym_xform *xform, > return -ENOTSUP; > } > > -static __rte_always_inline void __rte_hot > +static __rte_always_inline int32_t __rte_hot > otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, > struct cpt_request_info *req, > void *lmtline, > + struct rte_crypto_op *op, > uint64_t cpt_inst_w7) > { > + union rte_event_crypto_metadata *m_data; > union cpt_inst_s inst; > uint64_t lmt_status; > > + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) > + m_data = rte_cryptodev_sym_session_get_user_data( > + op->sym->session); > + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && > + op->private_data_offset) > + m_data = (union rte_event_crypto_metadata *) > + ((uint8_t *)op + > + op->private_data_offset); > + else > + return -EINVAL; > + > inst.u[0] = 0; > inst.s9x.res_addr = req->comp_baddr; > inst.u[2] = 0; > @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp > *qp, > inst.s9x.ei2 = req->ist.ei2; > inst.s9x.ei3 = cpt_inst_w7; > > - inst.s9x.qord = 1; > - inst.s9x.grp = qp->ev.queue_id; > - inst.s9x.tt = qp->ev.sched_type; > - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | > - qp->ev.flow_id; > - inst.s9x.wq_ptr = (uint64_t)req >> 3; > + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | > + m_data->response_info.flow_id) | > + ((uint64_t)m_data->response_info.sched_type << 32) | > + ((uint64_t)m_data->response_info.queue_id << 34)); > + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); > req->qp = qp; > > do { > @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp > *qp, > lmt_status = otx2_lmt_submit(qp->lf_nq_reg); > } while (lmt_status == 0); > > + return 0; > } > > static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const > struct otx2_cpt_qp *qp, > struct pending_queue *pend_q, > struct cpt_request_info *req, > + struct rte_crypto_op *op, > uint64_t cpt_inst_w7) > { > void *lmtline = qp->lmtline; > union cpt_inst_s 
inst; > uint64_t lmt_status; > > - if (qp->ca_enable) { > - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); > - return 0; > - } > + if (qp->ca_enable) > + return otx2_ca_enqueue_req(qp, req, lmtline, op, > cpt_inst_w7); > > if (unlikely(pend_q->pending_count >= > OTX2_CPT_DEFAULT_CMD_QLEN)) > return -EAGAIN; > @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, > goto req_fail; > } > > - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess- > >cpt_inst_w7); > + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, > + sess->cpt_inst_w7); > > if (unlikely(ret)) { > CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ - > 638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct > rte_crypto_op *op, > return ret; > } > > - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); > + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess- > >cpt_inst_w7); > > if (unlikely(ret)) { > /* Free buffer allocated by fill params routines */ @@ -707,7 > +721,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct > rte_crypto_op *op, > return ret; > } > > - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); > + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess- > >cpt_inst_w7); > > if (winsz && esn) { > seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git > a/drivers/event/octeontx2/otx2_evdev.c > b/drivers/event/octeontx2/otx2_evdev.c > index 770a801c4..59450521a 100644 > --- a/drivers/event/octeontx2/otx2_evdev.c > +++ b/drivers/event/octeontx2/otx2_evdev.c > @@ -12,8 +12,9 @@ > #include <rte_mbuf_pool_ops.h> > #include <rte_pci.h> > > -#include "otx2_evdev_stats.h" > #include "otx2_evdev.h" > +#include "otx2_evdev_crypto_adptr_tx.h" > +#include "otx2_evdev_stats.h" > #include "otx2_irq.h" > #include "otx2_tim_evdev.h" > > @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC > [!!(dev->tx_offloads & > NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] > [!!(dev->tx_offloads & > NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; > } > + event_dev->ca_enqueue 
= otx2_ssogws_ca_enq; > > if (dev->dual_ws) { > event_dev->enqueue = otx2_ssogws_dual_enq; > @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC > [!!(dev->tx_offloads & > > NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; > } > + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; > } > > event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; > diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > index 4e8a96cb6..2c9b347f0 100644 > --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c > @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, > RTE_SET_USED(cdev); > > *caps = > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | > - > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; > + > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | > + > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; > > return 0; > } > diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h > b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h > similarity index 93% > rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h > rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h > index 70b63933e..9e331fdd7 100644 > --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h > +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h > @@ -2,8 +2,8 @@ > * Copyright (C) 2020 Marvell International Ltd. 
> */ > > -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ > -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ > +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > > #include <rte_cryptodev.h> > #include <rte_cryptodev_pmd.h> > @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) > > return (uint64_t)(cop); > } > -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ > +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ > diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > new file mode 100644 > index 000000000..bcc3c473d > --- /dev/null > +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > @@ -0,0 +1,82 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright (C) 2021 Marvell International Ltd. > + */ > + > +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ > +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ > + > +#include <rte_cryptodev.h> > +#include <rte_cryptodev_pmd.h> > +#include <rte_event_crypto_adapter.h> > +#include <rte_eventdev.h> > + > +#include <otx2_cryptodev_qp.h> > +#include <otx2_worker.h> > + > +static inline uint16_t > +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) { > + union rte_event_crypto_metadata *m_data; > + struct rte_crypto_op *crypto_op; > + struct rte_cryptodev *cdev; > + struct otx2_cpt_qp *qp; > + uint8_t cdev_id; > + uint16_t qp_id; > + > + crypto_op = ev->event_ptr; > + if (crypto_op == NULL) > + return 0; > + > + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { > + m_data = rte_cryptodev_sym_session_get_user_data( > + crypto_op->sym->session); > + if (m_data == NULL) > + goto free_op; > + > + cdev_id = m_data->request_info.cdev_id; > + qp_id = m_data->request_info.queue_pair_id; > + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS > && > + crypto_op->private_data_offset) { > + m_data = (union rte_event_crypto_metadata *) > + ((uint8_t *)crypto_op + > + crypto_op->private_data_offset); > + cdev_id = 
m_data->request_info.cdev_id; > + qp_id = m_data->request_info.queue_pair_id; > + } else { > + goto free_op; > + } > + > + cdev = &rte_cryptodevs[cdev_id]; > + qp = cdev->data->queue_pairs[qp_id]; > + > + if (!ev->sched_type) > + otx2_ssogws_head_wait(tag_op); > + if (qp->ca_enable) > + return cdev->enqueue_burst(qp, &crypto_op, 1); > + > +free_op: > + rte_pktmbuf_free(crypto_op->sym->m_src); > + rte_crypto_op_free(crypto_op); > + return 0; > +} I am trying to understand this in requirement perspective. This enqueue function is same as SW adapter's enqueue function. Currently, application could directly enqueue to cryptodev in NEW mode. By having this in PMD, how is FORWARD mode taken care? > + > +static uint16_t __rte_hot > +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t > +nb_events) { > + struct otx2_ssogws *ws = port; > + > + RTE_SET_USED(nb_events); > + > + return otx2_ca_enq(ws->tag_op, ev); > +} > + > +static uint16_t __rte_hot > +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t > +nb_events) { > + struct otx2_ssogws_dual *ws = port; > + > + RTE_SET_USED(nb_events); > + > + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); } #endif > /* > +_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ > diff --git a/drivers/event/octeontx2/otx2_worker.h > b/drivers/event/octeontx2/otx2_worker.h > index 2b716c042..fd149be91 100644 > --- a/drivers/event/octeontx2/otx2_worker.h > +++ b/drivers/event/octeontx2/otx2_worker.h > @@ -10,7 +10,7 @@ > > #include <otx2_common.h> > #include "otx2_evdev.h" > -#include "otx2_evdev_crypto_adptr_dp.h" > +#include "otx2_evdev_crypto_adptr_rx.h" > #include "otx2_ethdev_sec_tx.h" > > /* SSO Operations */ > diff --git a/drivers/event/octeontx2/otx2_worker_dual.h > b/drivers/event/octeontx2/otx2_worker_dual.h > index 72b616439..36ae4dd88 100644 > --- a/drivers/event/octeontx2/otx2_worker_dual.h > +++ b/drivers/event/octeontx2/otx2_worker_dual.h > @@ -10,7 +10,7 @@ > > #include <otx2_common.h> > #include "otx2_evdev.h" 
> -#include "otx2_evdev_crypto_adptr_dp.h" > +#include "otx2_evdev_crypto_adptr_rx.h" > > /* SSO Operations */ > static __rte_always_inline uint16_t > -- > 2.25.1
> -----Original Message----- > From: Shijith Thotton <sthotton@marvell.com> > Sent: Friday, April 2, 2021 10:31 PM > To: dev@dpdk.org > Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net; > jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; > sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com; > Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik > G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay > <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van Haaren, > Harry <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com> > Subject: [PATCH v4 3/3] test/event_crypto: use crypto adapter enqueue API > > Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto > adapter if forward mode is supported in driver. > > Signed-off-by: Shijith Thotton <sthotton@marvell.com> > --- > app/test/test_event_crypto_adapter.c | 29 +++++++++++++++++++--------- > 1 file changed, 20 insertions(+), 9 deletions(-) > > diff --git a/app/test/test_event_crypto_adapter.c > b/app/test/test_event_crypto_adapter.c > index 335211cd8..2b07f1582 100644 > --- a/app/test/test_event_crypto_adapter.c > +++ b/app/test/test_event_crypto_adapter.c > @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { > struct rte_mempool *session_priv_mpool; > struct rte_cryptodev_config *config; > uint8_t crypto_event_port_id; > + uint8_t internal_port_op_fwd; > }; > > struct rte_event response_info = { > @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) > struct rte_event recv_ev; > int ret; > > - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, > NUM); > - TEST_ASSERT_EQUAL(ret, NUM, > - "Failed to send event to crypto adapter\n"); > + if (params.internal_port_op_fwd) > + ret = rte_event_crypto_adapter_enqueue(evdev, > TEST_APP_PORT_ID, > + ev, NUM); > + else > + ret = rte_event_enqueue_burst(evdev, > TEST_APP_PORT_ID, ev, NUM); > + 
TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto
> +adapter\n");
>
> 	while (rte_event_dequeue_burst(evdev,
> 			TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0)
> @@ -741,6 +745,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode)
> 	ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID, &cap);
> 	TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n");
>
> +	if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
> +		params.internal_port_op_fwd = 1;
> +	else
> +		params.internal_port_op_fwd = 0;
> +

There is a check at line 760 for FWD mode. Can't this be set there?

> 	/* Skip mode and capability mismatch check for SW eventdev */
> 	if (!(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) &&
> 	    !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) &&
> @@ -771,9 +780,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode)
>
> 	TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n");
>
> -	ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
> -				&params.crypto_event_port_id);
> -	TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
> +	if (!params.internal_port_op_fwd) {
> +		ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
> +					&params.crypto_event_port_id);
> +		TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
> +	}
>
> 	return TEST_SUCCESS;
> }
> @@ -809,15 +820,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode)
>
> 	if (!crypto_adapter_setup_done) {
> 		ret = configure_event_crypto_adapter(mode);
> -		if (!ret) {
> +		if (ret)
> +			return ret;
> +		if (!params.internal_port_op_fwd) {
> 			qid = TEST_CRYPTO_EV_QUEUE_ID;
> 			ret = rte_event_port_link(evdev,
> 					params.crypto_event_port_id, &qid, NULL, 1);
> 			TEST_ASSERT(ret >= 0, "Failed to link queue %d "
> 					"port=%u\n", qid, params.crypto_event_port_id);
> -		} else {
> -			return ret;
> 		}
> 		crypto_adapter_setup_done = 1;
> 	}
> --
> 2.25.1
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Friday, April 2, 2021 10:31 PM
> To: dev@dpdk.org
> Cc: Akhil Goyal <gakhil@marvell.com>; thomas@monjalon.net;
> jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> hemant.agrawal@nxp.com; nipun.gupta@nxp.com;
> sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com;
> Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik
> G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Shijith Thotton
> <sthotton@marvell.com>
> Subject: [PATCH v4 1/3] eventdev: introduce crypto adapter enqueue API
>
> From: Akhil Goyal <gakhil@marvell.com>
>
> In case an event from a previous stage is required to be forwarded to a
> crypto adapter and PMD supports internal event port in crypto adapter,
> exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
> we do not have a way to check in the API rte_event_enqueue_burst(),
> whether it is for crypto adapter or for eth tx adapter.

I may be missing something here. The crypto adapter is an atomic stage that has a port, which is set up during adapter configuration. So an application enqueuing events will end up sending them to the crypto adapter (as the adapter dequeues from a specific port). I am still wondering why there is a requirement for a new API.

>
> Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(),
> which can send to a crypto adapter.
>
> Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it is
> meant for event source and not event destination.
> And event port designated for crypto adapter is designed to be used for
> OP_NEW mode.
> > Hence, in order to support an event PMD which has an internal event port in > crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), > exposed via capability > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, > application should use rte_event_crypto_adapter_enqueue() API to > enqueue events. > > When internal port is not available(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW > mode), application can use API rte_event_enqueue_burst() as it was doing > earlier, i.e. retrieve event port used by crypto adapter and bind its event > queues to that port and enqueue events using the API > rte_event_enqueue_burst(). > > Signed-off-by: Akhil Goyal <gakhil@marvell.com> > --- > .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- > doc/guides/rel_notes/release_21_05.rst | 6 ++ > lib/librte_eventdev/eventdev_trace_points.c | 3 + > .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ > lib/librte_eventdev/rte_eventdev.c | 10 +++ > lib/librte_eventdev/rte_eventdev.h | 8 ++- > lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ > lib/librte_eventdev/version.map | 3 + > 8 files changed, 145 insertions(+), 27 deletions(-) > > diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst > b/doc/guides/prog_guide/event_crypto_adapter.rst > index 1e3eb7139..4fb5c688e 100644 > --- a/doc/guides/prog_guide/event_crypto_adapter.rst > +++ b/doc/guides/prog_guide/event_crypto_adapter.rst > @@ -55,21 +55,22 @@ which is needed to enqueue an event after the > crypto operation is completed. > RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports > -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability > the application -can directly submit the crypto operations to the cryptodev. > -If not, application retrieves crypto adapter's event port using > -rte_event_crypto_adapter_event_port_get() API. 
Then, links its event - > queue to this port and starts enqueuing crypto operations as events -to the > eventdev. The adapter then dequeues the events and submits the -crypto > operations to the cryptodev. After the crypto completions, the -adapter > enqueues events to the event device. > -Application can use this mode, when ingress packet ordering is needed. > -In this mode, events dequeued from the adapter will be treated as - > forwarded events. The application needs to specify the cryptodev ID -and > queue pair ID (request information) needed to enqueue a crypto -operation > in addition to the event information (response information) -needed to > enqueue an event after the crypto operation has completed. > +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event > PMD > +and crypto PMD supports internal event port > +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the > +application should use ``rte_event_crypto_adapter_enqueue()`` API to > +enqueue crypto operations as events to crypto adapter. If not, > +application retrieves crypto adapter's event port using > +``rte_event_crypto_adapter_event_port_get()`` API, links its event > +queue to this port and starts enqueuing crypto operations as events to > +eventdev using ``rte_event_enqueue_burst()``. The adapter then > dequeues > +the events and submits the crypto operations to the cryptodev. After > +the crypto operation is complete, the adapter enqueues events to the > +event device. The application can use this mode when ingress packet > +ordering is needed. In this mode, events dequeued from the adapter will > +be treated as forwarded events. The application needs to specify the > +cryptodev ID and queue pair ID (request information) needed to enqueue > +a crypto operation in addition to the event information (response > +information) needed to enqueue an event after the crypto operation has > +completed. > > .. 
_figure_event_crypto_adapter_op_forward: > > @@ -120,28 +121,44 @@ service function and needs to create an event port > for it. The callback is expected to fill the ``struct > rte_event_crypto_adapter_conf`` structure passed to it. > > -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port > created by adapter -can be retrieved using > ``rte_event_crypto_adapter_event_port_get()`` API. > -Application can use this event port to link with event queue on which it - > enqueues events towards the crypto adapter. > +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event > PMD > +and crypto PMD supports internal event port > +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), > events with > +crypto operations should be enqueued to the crypto adapter using > +``rte_event_crypto_adapter_enqueue()`` API. If not, the event port > +created by the adapter can be retrieved using > +``rte_event_crypto_adapter_event_port_get()`` > +API. An application can use this event port to link with an event > +queue, on which it enqueues events towards the crypto adapter using > +``rte_event_enqueue_burst()``. > > .. code-block:: c > > - uint8_t id, evdev, crypto_ev_port_id, app_qid; > + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; > struct rte_event ev; > + uint32_t cap; > int ret; > > - ret = rte_event_crypto_adapter_event_port_get(id, > &crypto_ev_port_id); > - ret = rte_event_queue_setup(evdev, app_qid, NULL); > - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, > 1); > - > // Fill in event info and update event_ptr with rte_crypto_op > memset(&ev, 0, sizeof(ev)); > - ev.queue_id = app_qid; > . > . 
> ev.event_ptr = op; > - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, > nb_events); > + > + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); > + if (cap & > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { > + ret = rte_event_crypto_adapter_enqueue(evdev_id, > app_ev_port_id, > + ev, nb_events); > + } else { > + ret = rte_event_crypto_adapter_event_port_get(id, > + &crypto_ev_port_id); > + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); > + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, > + NULL, 1); > + ev.queue_id = app_qid; > + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, > + nb_events); > + } > + > > Querying adapter capabilities > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > diff --git a/doc/guides/rel_notes/release_21_05.rst > b/doc/guides/rel_notes/release_21_05.rst > index e2b0886a9..0bee94877 100644 > --- a/doc/guides/rel_notes/release_21_05.rst > +++ b/doc/guides/rel_notes/release_21_05.rst > @@ -106,6 +106,12 @@ New Features > * Added support for periodic timer mode in eventdev timer adapter. > * Added support for periodic timer mode in octeontx2 event device driver. > > +* **Enhanced crypto adapter forward mode.** > + > + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events > to crypto > + adapter if forward mode is supported by driver. > + * Added support for crypto adapter forward mode in octeontx2 event and > crypto > + device driver. 
> > Removed Items > ------------- > diff --git a/lib/librte_eventdev/eventdev_trace_points.c > b/lib/librte_eventdev/eventdev_trace_points.c > index 1a0ccc448..3867ec800 100644 > --- a/lib/librte_eventdev/eventdev_trace_points.c > +++ b/lib/librte_eventdev/eventdev_trace_points.c > @@ -118,3 +118,6 @@ > RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, > > RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, > lib.eventdev.crypto.stop) > + > +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enque > ue, > + lib.eventdev.crypto.enq) > diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h > b/lib/librte_eventdev/rte_event_crypto_adapter.h > index 60630ef66..a4a4129b7 100644 > --- a/lib/librte_eventdev/rte_event_crypto_adapter.h > +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h > @@ -171,6 +171,7 @@ extern "C" { > #include <stdint.h> > > #include "rte_eventdev.h" > +#include "eventdev_pmd.h" > > /** > * Crypto event adapter mode > @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t > id, uint32_t *service_id); int > rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t > *event_port_id); > > +/** > + * Enqueue a burst of crypto operations as events object supplied in events object -> event objects > +*rte_event* > + * structure on an event crypto adapter designated by its event > +*dev_id* through > + * the event port specified by *port_id*. This function is supported if > +the > + * eventdev PMD has the > +#RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD > + * capability flag set. > + * > + * The *nb_events* parameter is the number of event objects to enqueue > +which are > + * supplied in the *ev* array of *rte_event* structure. > + * > + * The rte_event_crypto_adapter_enqueue() function returns the number > +of > + * events objects it actually enqueued. A return value equal to events object -> event objects > +*nb_events* > + * means that all event objects have been enqueued. 
> + * > + * @param dev_id > + * The identifier of the device. > + * @param port_id > + * The identifier of the event port. > + * @param ev > + * Points to an array of *nb_events* objects of type *rte_event* > +structure > + * which contain the event object enqueue operations to be processed. > + * @param nb_events > + * The number of event objects to enqueue, typically number of > + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) > + * available for this port. > + * > + * @return > + * The number of event objects actually enqueued on the event device. > The > + * return value can be less than the value of the *nb_events* parameter > when > + * the event devices queue is full or if invalid parameters are specified in a > + * *rte_event*. If the return value is less than *nb_events*, the remaining > + * events at the end of ev[] are not consumed and the caller has to take > care > + * of them, and rte_errno is set accordingly. Possible errno values include: > + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue > + * ID is invalid, or an event's sched type doesn't match the > + * capabilities of the destination queue. > + * - ENOSPC The event port was backpressured and unable to enqueue > + * one or more events. This error code is only applicable to > + * closed systems. 
> + */ > +static inline uint16_t > +rte_event_crypto_adapter_enqueue(uint8_t dev_id, > + uint8_t port_id, > + struct rte_event ev[], > + uint16_t nb_events) > +{ > + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; > + > +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG > + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); > + > + if (port_id >= dev->data->nb_ports) { > + rte_errno = EINVAL; > + return 0; > + } > +#endif > + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, > + nb_events); > + > + return dev->ca_enqueue(dev->data->ports[port_id], ev, > nb_events); } > + > #ifdef __cplusplus > } > #endif > diff --git a/lib/librte_eventdev/rte_eventdev.c > b/lib/librte_eventdev/rte_eventdev.c > index b57363f80..5674bd38e 100644 > --- a/lib/librte_eventdev/rte_eventdev.c > +++ b/lib/librte_eventdev/rte_eventdev.c > @@ -1405,6 +1405,15 @@ rte_event_tx_adapter_enqueue(__rte_unused > void *port, > return 0; > } > > +static uint16_t > +rte_event_crypto_adapter_enqueue(__rte_unused void *port, > + __rte_unused struct rte_event ev[], > + __rte_unused uint16_t nb_events) Args are not aligned > +{ > + rte_errno = ENOTSUP; > + return 0; > +} > + > struct rte_eventdev * > rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1427,6 > +1436,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) > > eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; > eventdev->txa_enqueue_same_dest = > rte_event_tx_adapter_enqueue; > + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; > > if (eventdev->data == NULL) { > struct rte_eventdev_data *eventdev_data = NULL; diff --git > a/lib/librte_eventdev/rte_eventdev.h > b/lib/librte_eventdev/rte_eventdev.h > index 9fc39e9ca..b50027f88 100644 > --- a/lib/librte_eventdev/rte_eventdev.h > +++ b/lib/librte_eventdev/rte_eventdev.h > @@ -1276,6 +1276,10 @@ typedef uint16_t > (*event_tx_adapter_enqueue_same_dest)(void *port, > * burst having same destination Ethernet port & Tx queue. 
> */ > > +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, > + struct rte_event ev[], uint16_t nb_events); > /**< @internal Enqueue > +burst of events on crypto adapter */ > + > #define RTE_EVENTDEV_NAME_MAX_LEN (64) > /**< @internal Max length of name of event PMD */ > > @@ -1347,6 +1351,8 @@ struct rte_eventdev { > */ > event_tx_adapter_enqueue txa_enqueue; > /**< Pointer to PMD eth Tx adapter enqueue function. */ > + event_crypto_adapter_enqueue ca_enqueue; > + /**< Pointer to PMD crypto adapter enqueue function. */ > struct rte_eventdev_data *data; > /**< Pointer to device data */ > struct rte_eventdev_ops *dev_ops; > @@ -1359,7 +1365,7 @@ struct rte_eventdev { > /**< Flag indicating the device is attached */ > > uint64_t reserved_64s[4]; /**< Reserved for future fields */ > - void *reserved_ptrs[4]; /**< Reserved for future fields */ > + void *reserved_ptrs[3]; /**< Reserved for future fields */ > } __rte_cache_aligned; > > extern struct rte_eventdev *rte_eventdevs; diff --git > a/lib/librte_eventdev/rte_eventdev_trace_fp.h > b/lib/librte_eventdev/rte_eventdev_trace_fp.h > index 349129c0f..5639e0b83 100644 > --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h > +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h > @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( > rte_trace_point_emit_u8(flags); > ) > > +RTE_TRACE_POINT_FP( > + rte_eventdev_trace_crypto_adapter_enqueue, > + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void > *ev_table, > + uint16_t nb_events), > + rte_trace_point_emit_u8(dev_id); > + rte_trace_point_emit_u8(port_id); > + rte_trace_point_emit_ptr(ev_table); > + rte_trace_point_emit_u16(nb_events); > +) > + > RTE_TRACE_POINT_FP( > rte_eventdev_trace_timer_arm_burst, > RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, > diff --git a/lib/librte_eventdev/version.map > b/lib/librte_eventdev/version.map index 3e5c09cfd..c63ba7a9c 100644 > --- a/lib/librte_eventdev/version.map > +++ b/lib/librte_eventdev/version.map > @@ -138,6 
+138,9 @@ EXPERIMENTAL { > __rte_eventdev_trace_port_setup; > # added in 20.11 > rte_event_pmd_pci_probe_named; > + > + # added in 21.05 > + __rte_eventdev_trace_crypto_adapter_enqueue; > }; > > INTERNAL { > -- > 2.25.1
On Sat, Apr 03, 2021 at 11:08:28AM +0000, Gujjar, Abhinandan S wrote:
>
>
> > -----Original Message-----
> > From: Shijith Thotton <sthotton@marvell.com>
> > Sent: Friday, April 2, 2021 10:31 PM
> > To: dev@dpdk.org
> > Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net;
> > jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> > hemant.agrawal@nxp.com; nipun.gupta@nxp.com;
> > sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com;
> > Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik
> > G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> > <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van Haaren,
> > Harry <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com>
> > Subject: [PATCH v4 3/3] test/event_crypto: use crypto adapter enqueue API
> >
> > Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto
> > adapter if forward mode is supported in driver.
> >
> > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> > ---
> >  app/test/test_event_crypto_adapter.c | 29 +++++++++++++++++++---------
> >  1 file changed, 20 insertions(+), 9 deletions(-)
> >
> > diff --git a/app/test/test_event_crypto_adapter.c
> > b/app/test/test_event_crypto_adapter.c
> > index 335211cd8..2b07f1582 100644
> > --- a/app/test/test_event_crypto_adapter.c
> > +++ b/app/test/test_event_crypto_adapter.c
> > @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params {
> >  	struct rte_mempool *session_priv_mpool;
> >  	struct rte_cryptodev_config *config;
> >  	uint8_t crypto_event_port_id;
> > +	uint8_t internal_port_op_fwd;
> >  };
> >
> >  struct rte_event response_info = {
> > @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev)
> >  	struct rte_event recv_ev;
> >  	int ret;
> >
> > -	ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM);
> > -	TEST_ASSERT_EQUAL(ret, NUM,
> > -			"Failed to send event to crypto adapter\n");
> > +	if (params.internal_port_op_fwd)
> > +		ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID,
> > +						       ev, NUM);
> > +	else
> > +		ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM);
> > +	TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n");
> >
> >  	while (rte_event_dequeue_burst(evdev,
> >  			TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0)
> > @@ -741,6 +745,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode)
> >  	ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID, &cap);
> >  	TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n");
> >
> > +	if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
> > +		params.internal_port_op_fwd = 1;
> > +	else
> > +		params.internal_port_op_fwd = 0;
> > +
> There is a check at line 760 for FWD mode. Can't this be set there?

Yes, I will move it over there.

> >  	/* Skip mode and capability mismatch check for SW eventdev */
> >  	if (!(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) &&
> >  	    !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) &&
> > @@ -771,9 +780,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode)
> >
> >  	TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n");
> >
> > -	ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
> > -				&params.crypto_event_port_id);
> > -	TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
> > +	if (!params.internal_port_op_fwd) {
> > +		ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
> > +				&params.crypto_event_port_id);
> > +		TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
> > +	}
> >
> >  	return TEST_SUCCESS;
> >  }
> > @@ -809,15 +820,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode)
> >
> >  	if (!crypto_adapter_setup_done) {
> >  		ret = configure_event_crypto_adapter(mode);
> > -		if (!ret) {
> > +		if (ret)
> > +			return ret;
> > +		if (!params.internal_port_op_fwd) {
> >  			qid = TEST_CRYPTO_EV_QUEUE_ID;
> >  			ret = rte_event_port_link(evdev,
> >  					params.crypto_event_port_id, &qid, NULL, 1);
> >  			TEST_ASSERT(ret >= 0, "Failed to link queue %d "
> >  					"port=%u\n", qid,
> >  					params.crypto_event_port_id);
> > -		} else {
> > -			return ret;
> >  		}
> >  		crypto_adapter_setup_done = 1;
> >  	}
> > --
> > 2.25.1
>
Hi Abhinandan,

> >
> > In case an event from a previous stage is required to be forwarded to a
> > crypto adapter and PMD supports internal event port in crypto adapter,
> > exposed via capability
> > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not
> > have a way to check in the API rte_event_enqueue_burst(), whether it is for
> > crypto adapter or for eth tx adapter.
> I may be missing something here. Crypto adapter is an atomic stage that has a
> port which is set up during the adapter configuration.
> So, application enqueuing events will end up sending to the crypto adapter
> (as the adapter dequeues from a specific port).
> Still wondering why there is a requirement for a new API.

While we do rte_event_enqueue_burst(), we do not have a way to identify whether
the event is for the crypto adapter or the eth adapter.
At the application layer, we know where to send the event, but in the event lib
we do not know to which port it needs to be sent.
IMO, the event port is specifically designed to work for OP_NEW mode.
I did not find a way to make it land into the crypto adapter.
Please let me know in case there is a better option other than adding a new API.

> >
> > Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(),
> > which can send to a crypto adapter.
> >
> > Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it is
> > meant for event source and not event destination.
> > And event port designated for crypto adapter is designed to be used for
> > OP_NEW mode.
> >
> > Hence, in order to support an event PMD which has an internal event port in
> > crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode),
> > exposed via capability
> > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
> > application should use rte_event_crypto_adapter_enqueue() API to
> > enqueue events.
> >
> > When internal port is not
> > available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode),
> > application can use API rte_event_enqueue_burst() as it was doing
> > earlier, i.e. retrieve event port used by crypto adapter and bind its event
> > queues to that port and enqueue events using the API
> > rte_event_enqueue_burst().
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
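The dispatch rule this thread keeps restating — use the new crypto adapter enqueue API when the PMD advertises the internal-port forward capability, otherwise fall back to the classic "get adapter port, link queue, enqueue burst" path — can be sketched in plain C. Everything DPDK-specific below is mocked: the capability macro's bit value, the event struct, and the two `mock_*` enqueue functions are invented stand-ins so the branch logic is self-contained; only the capability name and the shape of the decision come from the patch.

```c
#include <stdint.h>
#include <stddef.h>

/* Mock of the DPDK capability bit; the value is a placeholder, not the
 * real flag from rte_event_crypto_adapter.h. */
#define MOCK_CAP_INTERNAL_PORT_OP_FWD (1u << 0)

/* Minimal stand-in for struct rte_event. */
struct mock_event {
	void *event_ptr;
	uint8_t queue_id;
};

/* Stubbed enqueue paths: each records which path was taken and reports
 * every event as accepted. In real code these would be
 * rte_event_crypto_adapter_enqueue() and rte_event_enqueue_burst(). */
static int last_path; /* 1 = crypto adapter enqueue, 2 = generic enqueue */

static uint16_t
mock_crypto_adapter_enqueue(uint8_t dev, uint8_t port,
			    struct mock_event ev[], uint16_t nb_events)
{
	(void)dev; (void)port; (void)ev;
	last_path = 1;
	return nb_events;
}

static uint16_t
mock_event_enqueue_burst(uint8_t dev, uint8_t port,
			 struct mock_event ev[], uint16_t nb_events)
{
	(void)dev; (void)port; (void)ev;
	last_path = 2;
	return nb_events;
}

/* Application-side dispatch: the capability is queried once at setup
 * (rte_event_crypto_adapter_caps_get() in the real code) and cached,
 * so the hot path only tests a flag. */
static uint16_t
enqueue_to_crypto(uint32_t caps, uint8_t dev, uint8_t port,
		  struct mock_event ev[], uint16_t nb_events)
{
	if (caps & MOCK_CAP_INTERNAL_PORT_OP_FWD)
		return mock_crypto_adapter_enqueue(dev, port, ev, nb_events);
	/* OP_NEW path: events go through the adapter's event port, which
	 * the application linked to its queue during setup. */
	return mock_event_enqueue_burst(dev, port, ev, nb_events);
}
```

The point of caching the capability in a flag (as the test patch's `internal_port_op_fwd` field does) is that the per-burst cost is a single branch, not a capability query.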
Hi Abhinandan,

Please see inline.

Thanks,
Anoob

> >
> > Advertise crypto adapter forward mode capability and set crypto
> > adapter enqueue function in driver.
> >
> > Signed-off-by: Shijith Thotton <sthotton@marvell.com>

[snip]

> > +
> > +	if (!ev->sched_type)
> > +		otx2_ssogws_head_wait(tag_op);
> > +	if (qp->ca_enable)
> > +		return cdev->enqueue_burst(qp, &crypto_op, 1);
> > +
> > +free_op:
> > +	rte_pktmbuf_free(crypto_op->sym->m_src);
> > +	rte_crypto_op_free(crypto_op);
> > +	return 0;
> > +}

> I am trying to understand this in requirement perspective. This enqueue
> function is same as SW adapter's enqueue function.
> Currently, application could directly enqueue to cryptodev in NEW mode. By
> having this in PMD, how is FORWARD mode taken care?

[Anoob] Difference is the ordering point when used with ORDERED flows.

If application is working on an ORDERED flow, with OP_NEW, application would
require to queue to an ATOMIC queue before submitting to cryptodev (to
maintain ordering). But with OP_FORWARD, application can provide an event to
the event PMD and internally it can take care of ordering as well as enqueue
to crypto "hardware". This becomes particularly useful when event hardware
can support ordering while enqueueing to crypto hardware (and hence the
"internal port").

With the current spec, OP_FORWARD would allow application to enqueue
crypto_op as an event to event device. But this event doesn't have any
additional information which would indicate it is destined to crypto. The new
API would solve this issue.

[snip]
> -----Original Message----- > From: Anoob Joseph <anoobj@marvell.com> > Sent: Tuesday, April 6, 2021 8:31 PM > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com> > Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; > sachin.saxena@oss.nxp.com; matan@nvidia.com; Zhang, Roy Fan > <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik G > <erik.g.carrillo@intel.com>; Jayatheerthan, Jay > <jay.jayatheerthan@intel.com>; Pavan Nikhilesh Bhagavatula > <pbhagavatula@marvell.com>; Van Haaren, Harry > <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com>; Shijith > Thotton <sthotton@marvell.com>; dev@dpdk.org > Subject: RE: [PATCH v4 2/3] event/octeontx2: support crypto adapter > forward mode > > Hi Abhinandan, > > Please see inline. > > Thanks, > Anoob > > > > > > > Advertise crypto adapter forward mode capability and set crypto > > > adapter enqueue function in driver. > > > > > > Signed-off-by: Shijith Thotton <sthotton@marvell.com> > > [snip] > > > > + > > > + if (!ev->sched_type) > > > + otx2_ssogws_head_wait(tag_op); > > > + if (qp->ca_enable) > > > + return cdev->enqueue_burst(qp, &crypto_op, 1); > > > + > > > +free_op: > > > + rte_pktmbuf_free(crypto_op->sym->m_src); > > > + rte_crypto_op_free(crypto_op); > > > + return 0; > > > +} > > > > I am trying to understand this in requirement perspective. This > > enqueue function is same as SW adapter's enqueue function. > > Currently, application could directly enqueue to cryptodev in NEW > > mode. By having this in PMD, how is FORWARD mode taken care? > > > > [Anoob] Difference is the ordering point when used with ORDERED flows. > > If application is working on an ORDERED flow, with OP_NEW, application > would require to queue to an ATOMIC queue before submitting to cryptodev > (to maintain ordering). 
But with OP_FORWARD, application can provide an
> event to the event PMD and internally it can take care of ordering as well
> enqueue to crypto "hardware". This becomes particularly useful when event
> hardware can support ordering while enqueueing to crypto hardware (and
> hence the "internal port").

Got it.
Referring to the above code, if qp->ca_enable is not enabled, the ops and
mbuf will be freed and 0 returned. Doesn't this make the application/worker
think that the enqueue was not successful and that it should retry enqueuing
the same buffers again?

>
> With the current spec, OP_FORWARD would allow application to enqueue
> crypto_op as an event to event device. But this event doesn't have any
> additional information which would indicate it is destined to crypto. The new
> API would solve this issue.
>
> [snip]
> -----Original Message----- > From: Akhil Goyal <gakhil@marvell.com> > Sent: Monday, April 5, 2021 11:11 PM > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Shijith Thotton > <sthotton@marvell.com>; dev@dpdk.org > Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; > sachin.saxena@oss.nxp.com; Anoob Joseph <anoobj@marvell.com>; > matan@nvidia.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; > g.singh@nxp.com; Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, > Jay <jay.jayatheerthan@intel.com>; Pavan Nikhilesh Bhagavatula > <pbhagavatula@marvell.com>; Van Haaren, Harry > <harry.van.haaren@intel.com> > Subject: RE: [PATCH v4 1/3] eventdev: introduce crypto adapter enqueue API > > Hi Abhnandan, > > > > > > In case an event from a previous stage is required to be forwarded > > > to a crypto adapter and PMD supports internal event port in crypto > > > adapter, exposed via capability > > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do > not have a > > > way to check in the API rte_event_enqueue_burst(), whether it is for > > > crypto adapter or for eth tx adapter. > > I may be missing something here. Crypto adapter is an atomic stage has > > a port which is setup during the adapter configuration. > > So, application enqueuing events will end up sending to the crypto > > adapter (As the adapter dequeues from a specific port). > > Still wondering why there is requirement for new API. > > While we do rte_event_enqueue_burst(), we do not have a way to identify > whether The event is for crypto adapter or the eth adaptor. > At the application layer, we know where to send the event, but in the event > lib We do not know which port it need to be sent. > IMO, event port is specifically designed to work for OP_NEW mode. > I did not find a way to make it land into crypto adapter. > Please let me know in case there is a better option other than adding a new > API. 
This looks like a hack, as the new API does not actually enqueue events to the
eventdev. Rather, it directly extracts the crypto info from each event and then
enqueues to the cryptodev.

How about doing this (no changes to rte_event_enqueue_burst(), the PMD takes
care of things):

uint16_t __rte_hot
ssows_enq_burst(void *port, const struct rte_event ev[], uint16_t nb_events)
{
+	struct otx2_ssogws *ws = port;
+
+	RTE_SET_USED(nb_events);
+
+	if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
+		return otx2_ca_enq(ws->tag_op, ev);

	return ssows_enq(port, ev);
}

Everything will be hidden under the callback and the application will not have
any changes.

> >
> > > >
> > > > Hence we need a new API similar to
> > > > rte_event_eth_tx_adapter_enqueue(),
> > > > which can send to a crypto adapter.
> > > >
> > > > Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as
> > > > it is meant for event source and not event destination.
> > > > And event port designated for crypto adapter is designed to be used
> > > > for OP_NEW mode.
> > > >
> > > > Hence, in order to support an event PMD which has an internal event
> > > > port in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode),
> > > > exposed via capability
> > > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
> > > > application should use rte_event_crypto_adapter_enqueue() API to
> > > > enqueue events.
> > > >
> > > > When internal port is not
> > > > available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode),
> > > > application can use API rte_event_enqueue_burst() as it was
> > > > doing earlier, i.e. retrieve event port used by crypto adapter and
> > > > bind its event queues to that port and enqueue events using the API
> > > > rte_event_enqueue_burst().
> > > >
> > > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
On Wed, Apr 07, 2021 at 03:06:16PM +0000, Gujjar, Abhinandan S wrote: > > > > -----Original Message----- > > From: Anoob Joseph <anoobj@marvell.com> > > Sent: Tuesday, April 6, 2021 8:31 PM > > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com> > > Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; > > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; > > sachin.saxena@oss.nxp.com; matan@nvidia.com; Zhang, Roy Fan > > <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik G > > <erik.g.carrillo@intel.com>; Jayatheerthan, Jay > > <jay.jayatheerthan@intel.com>; Pavan Nikhilesh Bhagavatula > > <pbhagavatula@marvell.com>; Van Haaren, Harry > > <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com>; Shijith > > Thotton <sthotton@marvell.com>; dev@dpdk.org > > Subject: RE: [PATCH v4 2/3] event/octeontx2: support crypto adapter > > forward mode > > > > Hi Abhinandan, > > > > Please see inline. > > > > Thanks, > > Anoob > > > > > > > > > > Advertise crypto adapter forward mode capability and set crypto > > > > adapter enqueue function in driver. > > > > > > > > Signed-off-by: Shijith Thotton <sthotton@marvell.com> > > > > [snip] > > > > > > + > > > > + if (!ev->sched_type) > > > > + otx2_ssogws_head_wait(tag_op); > > > > + if (qp->ca_enable) > > > > + return cdev->enqueue_burst(qp, &crypto_op, 1); > > > > + > > > > +free_op: > > > > + rte_pktmbuf_free(crypto_op->sym->m_src); > > > > + rte_crypto_op_free(crypto_op); > > > > + return 0; > > > > +} > > > > > > I am trying to understand this in requirement perspective. This > > > enqueue function is same as SW adapter's enqueue function. > > > Currently, application could directly enqueue to cryptodev in NEW > > > mode. By having this in PMD, how is FORWARD mode taken care? > > > > > > > [Anoob] Difference is the ordering point when used with ORDERED flows. 
> >
> > If application is working on an ORDERED flow, with OP_NEW, application
> > would require to queue to an ATOMIC queue before submitting to cryptodev
> > (to maintain ordering). But with OP_FORWARD, application can provide an
> > event to the event PMD and internally it can take care of ordering as well
> > enqueue to crypto "hardware". This becomes particularly useful when event
> > hardware can support ordering while enqueueing to crypto hardware (and
> > hence the "internal port").
> Got it.
> Referring to the above code, if qp->ca_enable is not enabled, the ops and
> mbuf will be freed and returned 0. Does not this make the application/worker
> to think that enqueue is not successful and it should retry enqueuing same
> buffers again?
>

Thanks for pointing out. Will use a proper error number for failures in the
next version.

> >
> > With the current spec, OP_FORWARD would allow application to enqueue
> > crypto_op as an event to event device. But this event doesn't have any
> > additional information which would indicate it is destined to crypto. The
> > new API would solve this issue.
> >
> > [snip]
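The fix agreed above — set rte_errno in the driver when the enqueue fails instead of silently returning 0 — matters because it lets callers tell back-pressure apart from a hard error. The caller-side pattern that distinction enables can be sketched with mocks. Everything here is invented for illustration (the `mock_` names and the two-events-per-call limit); only the convention of "short return count plus an errno value" is taken from the discussion.

```c
#include <stdint.h>
#include <errno.h>

/* Mock of rte_errno and of a crypto adapter enqueue that accepts at most
 * two events per call, signalling back-pressure via ENOSPC. Both are
 * stand-ins, not the real DPDK behaviour. */
static int mock_rte_errno;

static uint16_t
mock_ca_enqueue(uint16_t nb_events)
{
	if (nb_events > 2) {
		mock_rte_errno = ENOSPC; /* back-pressure, partial accept */
		return 2;
	}
	mock_rte_errno = 0;
	return nb_events;
}

/* Caller pattern: a short return is not fatal while events are still
 * being accepted; only a zero return with an error code set is treated
 * as a hard failure. Returns the number of enqueue calls made, or -1 on
 * hard failure. */
static int
enqueue_all(uint16_t nb_events)
{
	int calls = 0;

	while (nb_events > 0) {
		uint16_t sent = mock_ca_enqueue(nb_events);

		calls++;
		if (sent == 0 && mock_rte_errno != 0)
			return -1;
		nb_events -= sent;
	}
	return calls;
}
```

Without the errno convention, a worker seeing a zero return cannot tell whether to retry (transient back-pressure) or drop the burst (the ops were already freed by the driver), which is exactly the ambiguity raised in the review comment above.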
Hi Abhinandan, > > > > > > > > In case an event from a previous stage is required to be forwarded > > > > to a crypto adapter and PMD supports internal event port in crypto > > > > adapter, exposed via capability > > > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do > > not have a > > > > way to check in the API rte_event_enqueue_burst(), whether it is for > > > > crypto adapter or for eth tx adapter. > > > I may be missing something here. Crypto adapter is an atomic stage has > > > a port which is setup during the adapter configuration. > > > So, application enqueuing events will end up sending to the crypto > > > adapter (As the adapter dequeues from a specific port). > > > Still wondering why there is requirement for new API. > > > > While we do rte_event_enqueue_burst(), we do not have a way to identify > > whether The event is for crypto adapter or the eth adaptor. > > At the application layer, we know where to send the event, but in the event > > lib We do not know which port it need to be sent. > > IMO, event port is specifically designed to work for OP_NEW mode. > > I did not find a way to make it land into crypto adapter. > > Please let me know in case there is a better option other than adding a new > > API. > This looks like a hack as the new API does not actual enqueue events to > eventdev. > Rather it directly extracts the crypto info from each event and then enqueue > to cryptodev. > > How about doing this (No changes to rte_event_enqueue_burst(), PMD takes > care of things ): > uint16_t __rte_hot > ssows_enq_burst(void *port, const struct rte_event ev[], uint16_t nb_events) > { > + struct otx2_ssogws *ws = port; > + > + RTE_SET_USED(nb_events); > + > + if (cap & > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) > + return otx2_ca_enq(ws->tag_op, ev); > > return ssows_enq(port, ev); > } > > Everything will be hidden under the callback and application will not have > any changes. 
>
You want to say we somehow save a flag in struct otx2_ssogws from the
capability and check that flag here to enqueue to crypto. But that will not
work, as the events coming in this API can be for both crypto as well as eth
tx adapter. If this check is there, all events will go to the crypto adapter.

In the library or the driver, we do not have a mechanism to determine the
destination of the event (note that the event type is for the source of the
event and not the destination).
Using some fields of rte_event would be a hack IMO.

The solution proposed in this patchset is not a hack. Here is the reasoning
for it:
- The application, when it dequeues an event from the previous stage, knows
what to do with the event - send it to crypto or send it to ethernet. Hence it
makes sense to call a different API there itself, as inside
rte_event_enqueue_burst() there is no way to determine if it is for the crypto
adapter or the eth adapter.
Moreover, the solution is very similar to what the eth tx adapter already
supports (a new API to enqueue).

I hope this makes things clearer now.

Regards,
Akhil

>
> > > >
> > > >
> > > > Hence we need a new API similar to
> > > > rte_event_eth_tx_adapter_enqueue(),
> > > > which can send to a crypto adapter.
> > > >
> > > > Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as
> > > > it is meant for event source and not event destination.
> > > > And event port designated for crypto adapter is designed to be used
> > > > for OP_NEW mode.
> > > >
> > > > Hence, in order to support an event PMD which has an internal event
> > > > port in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode),
> > > > exposed via capability
> > > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
> > > > application should use rte_event_crypto_adapter_enqueue() API to
> > > > enqueue events.
> > > >
> > > > When internal port is not
> > > > available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode),
> > > > application can use API rte_event_enqueue_burst() as it was
> > > > doing earlier, i.e. retrieve event port used by crypto adapter and
> > > > bind its event queues to that port and enqueue events using the API
> > > > rte_event_enqueue_burst().
> > > >
> > > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> -----Original Message----- > From: Akhil Goyal <gakhil@marvell.com> > Sent: Thursday, April 8, 2021 8:27 PM > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Shijith Thotton > <sthotton@marvell.com>; dev@dpdk.org > Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; > sachin.saxena@oss.nxp.com; Anoob Joseph <anoobj@marvell.com>; > matan@nvidia.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; > g.singh@nxp.com; Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, > Jay <jay.jayatheerthan@intel.com>; Pavan Nikhilesh Bhagavatula > <pbhagavatula@marvell.com>; Van Haaren, Harry > <harry.van.haaren@intel.com> > Subject: RE: [PATCH v4 1/3] eventdev: introduce crypto adapter enqueue API > > Hi Abhinandan, > > > > > > > > > > In case an event from a previous stage is required to be > > > > > forwarded to a crypto adapter and PMD supports internal event > > > > > port in crypto adapter, exposed via capability > > > > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we > do > > > not have a > > > > > way to check in the API rte_event_enqueue_burst(), whether it is > > > > > for crypto adapter or for eth tx adapter. > > > > I may be missing something here. Crypto adapter is an atomic stage > > > > has a port which is setup during the adapter configuration. > > > > So, application enqueuing events will end up sending to the crypto > > > > adapter (As the adapter dequeues from a specific port). > > > > Still wondering why there is requirement for new API. > > > > > > While we do rte_event_enqueue_burst(), we do not have a way to > > > identify whether The event is for crypto adapter or the eth adaptor. > > > At the application layer, we know where to send the event, but in > > > the event lib We do not know which port it need to be sent. > > > IMO, event port is specifically designed to work for OP_NEW mode. > > > I did not find a way to make it land into crypto adapter. 
> > > Please let me know in case there is a better option other than > > > adding a new API. > > This looks like a hack as the new API does not actual enqueue events > > to eventdev. > > Rather it directly extracts the crypto info from each event and then > > enqueue to cryptodev. > > > > How about doing this (No changes to rte_event_enqueue_burst(), PMD > > takes care of things ): > > uint16_t __rte_hot > > ssows_enq_burst(void *port, const struct rte_event ev[], uint16_t > > nb_events) { > > + struct otx2_ssogws *ws = port; > > + > > + RTE_SET_USED(nb_events); > > + > > + if (cap & > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) > > + return otx2_ca_enq(ws->tag_op, ev); > > > > return ssows_enq(port, ev); > > } > > > > Everything will be hidden under the callback and application will not > > have any changes. > > > You want to say we somehow save a flag in struct otx2_ssogws from the > capability And check that flag here to enq to crypto. But that will not work, as > the events coming In this API can be for both crypto as well as eth tx adapter. > If this check is there, all events will go to crypto adapter. > > In the library or the driver, we do not have a mechanism to determine the > destination of the event (Note that event type is for source of event and not > destination). > Using some fields of rte_event will be a hack IMO. > > The solution proposed in this patchset is not a hack. Here is a reasoning to it: > - The application when dequeues an event from the previous stage, knows > what to do with the event - send it to crypto or send to ethernet. Hence it > makes sense to call the different API there itself as inside the > rte_event_enqueue_burst() there is no way to determine if it is for crypto > adapter or eth adapter. > Moreover, the solution is very similar to what eth tx adapter already support > (a new API to enqueue). > > I hope this make things clearer now. Yes Akhil. This makes it more clear. Thanks for clarifying. 
> > Regards, > Akhil > > > > > > > > > > > > > > > > > Hence we need a new API similar to > > > > > rte_event_eth_tx_adapter_enqueue(), > > > > > which can send to a crypto adapter. > > > > > > > > > > Note that RTE_EVENT_TYPE_* cannot be used to make that decision, > > > > > as it is meant for event source and not event destination. > > > > > And event port designated for crypto adapter is designed to be > > > > > used for OP_NEW mode. > > > > > > > > > > Hence, in order to support an event PMD which has an internal > > > > > event port > > > > in > > > > > crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD > mode), > > > exposed > > > > > via capability > > > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, > > > > > application should use rte_event_crypto_adapter_enqueue() API to > > > > > enqueue events. > > > > > > > > > > When internal port is not > > > > available(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW > > > > > mode), application can use API rte_event_enqueue_burst() as it > > > > > was doing earlier, i.e. retrieve event port used by crypto > > > > > adapter and bind its event queues to that port and enqueue > > > > > events using the API rte_event_enqueue_burst(). > > > > > > > > > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Do you have any further comments on this patchset? If not, could you please ack it?
This series proposes a new event device enqueue operation if crypto adapter
forward mode is supported. Second patch in the series is the implementation of
the same in PMD. Test application changes are added in third patch.

v5:
- Set rte_errno if crypto adapter enqueue fails in driver.
- Test application code restructuring.

v4:
- Fix debug build.

v3:
- Added crypto adapter test application changes.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (1):
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (2):
  event/octeontx2: support crypto adapter forward mode
  test/event_crypto: use crypto adapter enqueue API

 app/test/test_event_crypto_adapter.c          | 33 +++++---
 .../prog_guide/event_crypto_adapter.rst       | 69 +++++++++------
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 63 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  8 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  1 +
 16 files changed, 286 insertions(+), 60 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

--
2.25.1
From: Akhil Goyal <gakhil@marvell.com>

In case an event from a previous stage is required to be forwarded to a crypto
adapter and PMD supports internal event port in crypto adapter, exposed via
capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have a
way to check in the API rte_event_enqueue_burst(), whether it is for crypto
adapter or for eth tx adapter.

Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(), which
can send to a crypto adapter.

Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it is
meant for event source and not event destination. And event port designated
for crypto adapter is designed to be used for OP_NEW mode.

Hence, in order to support an event PMD which has an internal event port in
crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via
capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application
should use rte_event_crypto_adapter_enqueue() API to enqueue events.

When internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode),
application can use API rte_event_enqueue_burst() as it was doing earlier,
i.e. retrieve event port used by crypto adapter and bind its event queues to
that port and enqueue events using the API rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 .../prog_guide/event_crypto_adapter.rst       | 69 ++++++++++++-------
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 63 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  8 ++-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  1 +
 8 files changed, 143 insertions(+), 27 deletions(-)

diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst
index 1e3eb7139..4fb5c688e 100644
--- a/doc/guides/prog_guide/event_crypto_adapter.rst
+++ b/doc/guides/prog_guide/event_crypto_adapter.rst
@@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed.
 
 RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports
-RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application
-can directly submit the crypto operations to the cryptodev.
-If not, application retrieves crypto adapter's event port using
-rte_event_crypto_adapter_event_port_get() API. Then, links its event
-queue to this port and starts enqueuing crypto operations as events
-to the eventdev. The adapter then dequeues the events and submits the
-crypto operations to the cryptodev. After the crypto completions, the
-adapter enqueues events to the event device.
-Application can use this mode, when ingress packet ordering is needed.
-In this mode, events dequeued from the adapter will be treated as
-forwarded events. The application needs to specify the cryptodev ID
-and queue pair ID (request information) needed to enqueue a crypto
-operation in addition to the event information (response information)
-needed to enqueue an event after the crypto operation has completed.
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index be94f0cd6..f32d3d570 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -139,6 +139,12 @@ New Features the events across multiple stages. * This also reduces the scheduling overhead on a event device. 
+* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..a4a4129b7 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include <stdint.h> #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. 
A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..477a59461 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1423,6 +1427,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1435,7 +1441,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 902df0ae3..7e264d3b8 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -143,6 +143,7 @@ EXPERIMENTAL { rte_event_vector_pool_create; rte_event_eth_rx_adapter_vector_limits_get; 
rte_event_eth_rx_adapter_queue_event_vector_config; + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 122 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..4808dca64 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + else + return -EINVAL; + inst.u[0] = 0; 
inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ 
otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 770a801c4..59450521a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from 
drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum 
rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - ¶ms.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
This series proposes a new event device enqueue operation for use when crypto adapter forward mode is supported. The second patch implements it in the octeontx2 PMD, and the third patch updates the test application accordingly.

v6:
- Rebased.

v5:
- Set rte_errno if crypto adapter enqueue fails in driver.
- Test application code restructuring.

v4:
- Fix debug build.

v3:
- Added crypto adapter test application changes.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (1):
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (2):
  event/octeontx2: support crypto adapter forward mode
  test/event_crypto: use crypto adapter enqueue API

 app/test/test_event_crypto_adapter.c | 33 +++++---
 .../prog_guide/event_crypto_adapter.rst | 69 +++++++------
 doc/guides/rel_notes/release_21_05.rst | 6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
 drivers/event/octeontx2/otx2_evdev.c | 5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h | 2 +-
 drivers/event/octeontx2/otx2_worker_dual.h | 2 +-
 lib/librte_eventdev/eventdev_trace_points.c | 3 +
 .../rte_event_crypto_adapter.h | 63 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c | 10 +++
 lib/librte_eventdev/rte_eventdev.h | 8 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++
 lib/librte_eventdev/version.map | 1 +
 16 files changed, 286 insertions(+), 60 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

-- 
2.25.1
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..477a59461 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1423,6 +1427,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1435,7 +1441,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 902df0ae3..7e264d3b8 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -143,6 +143,7 @@ EXPERIMENTAL { rte_event_vector_pool_create; rte_event_eth_rx_adapter_vector_limits_get; 
rte_event_eth_rx_adapter_queue_event_vector_config; + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
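[Editorial aside: the choice between the two enqueue paths introduced by the patch above reduces to a single capability-bit test at setup time. The sketch below models only that decision in plain, self-contained C — the flag value and helper name are illustrative stand-ins, not the real DPDK symbols (the real flag is defined in rte_event_crypto_adapter.h).]

```c
#include <stdint.h>

/* Illustrative stand-in for RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
 * the actual bit position is defined by the DPDK headers and may differ. */
#define CAP_INTERNAL_PORT_OP_FWD (1u << 3)

/* Returns 1 when events should go through rte_event_crypto_adapter_enqueue()
 * (the PMD has an internal event port in FORWARD mode), 0 when the
 * application must instead retrieve the adapter's event port, link a queue
 * to it, and use plain rte_event_enqueue_burst(). */
static int use_crypto_adapter_enqueue(uint32_t caps)
{
	return (caps & CAP_INTERNAL_PORT_OP_FWD) != 0;
}
```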
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 122 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..4808dca64 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + else + return -EINVAL; + inst.u[0] = 0; 
inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ 
otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 770a801c4..59450521a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from 
drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
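[Editorial aside: the driver patch above replaces the per-queue-pair response fields with per-op metadata packed into the CPT instruction word. As a sanity check on the bit layout, here is a self-contained sketch of the inst.u[2] packing used in otx2_ca_enqueue_req(); the event-type constant is a stand-in for RTE_EVENT_TYPE_CRYPTODEV from rte_eventdev.h, so treat the exact values as illustrative.]

```c
#include <stdint.h>

/* Stand-in for RTE_EVENT_TYPE_CRYPTODEV from rte_eventdev.h. */
#define EVENT_TYPE_CRYPTODEV 0x1u

/* Mirrors the inst.u[2] layout in otx2_ca_enqueue_req(): event type in tag
 * bits [31:28] with the flow id below it, sched type starting at bit 32,
 * queue id (event group) starting at bit 34. */
static uint64_t
ca_pack_word2(uint32_t flow_id, uint8_t sched_type, uint8_t queue_id)
{
	return (uint64_t)((EVENT_TYPE_CRYPTODEV << 28) | flow_id) |
	       ((uint64_t)sched_type << 32) |
	       ((uint64_t)queue_id << 34);
}
```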
Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum 
rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - ¶ms.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
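[Editorial aside: the session-type branching that both the driver and the test changes rely on can be summarised in isolation. The sketch below mocks the metadata lookup performed in otx2_ca_enq(); all type and field names here are simplified stand-ins for the rte_crypto_op fields, so it illustrates the control flow only, not the real API.]

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for RTE_CRYPTO_OP_WITH_SESSION / _SESSIONLESS. */
enum mock_sess_type { MOCK_OP_WITH_SESSION, MOCK_OP_SESSIONLESS };

struct mock_crypto_op {
	enum mock_sess_type sess_type;
	uint16_t private_data_offset; /* metadata offset for sessionless ops */
	void *session_user_data;      /* metadata attached to the session */
	uint8_t priv[32];             /* space the offset can point into */
};

/* Mirrors the lookup order in the driver: session user data for
 * session-based ops, op + private_data_offset for sessionless ops with a
 * non-zero offset, NULL otherwise (the op is rejected). */
static void *
mock_metadata_of(struct mock_crypto_op *op)
{
	if (op->sess_type == MOCK_OP_WITH_SESSION)
		return op->session_user_data;
	if (op->sess_type == MOCK_OP_SESSIONLESS && op->private_data_offset)
		return (uint8_t *)op + op->private_data_offset;
	return NULL;
}
```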
Hi Shijith,
CI is failing for this patch -> ci/Intel-compilation fail apply issues
Whereas CI is not running for other patches. Could you please check?
Regards
Abhinandan
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Friday, April 9, 2021 7:30 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net;
> jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> hemant.agrawal@nxp.com; nipun.gupta@nxp.com;
> sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com;
> Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik
> G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v6 3/3] test/event_crypto: use crypto adapter enqueue API
>
> Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto
> adapter if forward mode is supported in driver.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> [snip]
On Mon, Apr 12, 2021 at 05:10:56AM +0000, Gujjar, Abhinandan S wrote:
> Hi Shijith,
>
> CI is failing for this patch -> ci/Intel-compilation fail apply issues
> Whereas CI is not running for other patches. Could you please check?
>
[snip]
Hi Abhinandan,
Full CI is run only for the last patch in the series and checkpatch is run for
the remaining. I have rebased the series (v6) on top of dpdk-next-eventdev, but
ci/Intel-compilation is using dpdk repo and apply is failing.
Thanks,
Shijith
Hi Shijith,

> -----Original Message-----
> From: Shijith Thotton <shijith.thotton@gmail.com>
> Sent: Monday, April 12, 2021 12:33 PM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>
> Cc: Shijith Thotton <sthotton@marvell.com>; dev@dpdk.org;
> thomas@monjalon.net; jerinj@marvell.com; hemant.agrawal@nxp.com;
> nipun.gupta@nxp.com; sachin.saxena@oss.nxp.com; anoobj@marvell.com;
> matan@nvidia.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> g.singh@nxp.com; Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan,
> Jay <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van
> Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal
> <gakhil@marvell.com>
> Subject: Re: [dpdk-dev] [PATCH v6 3/3] test/event_crypto: use crypto
> adapter enqueue API
>
> On Mon, Apr 12, 2021 at 05:10:56AM +0000, Gujjar, Abhinandan S wrote:
> > Hi Shijith,
> >
> > CI is failing for this patch -> ci/Intel-compilation fail apply issues
> > Whereas CI is not running for other patches. Could you please check?
> >
> [snip]
>
> Hi Abhinandan,
>
> Full CI is run only for the last patch in the series and checkpatch is run for the
> remaining. I have rebased the series (v6) on top of dpdk-next-eventdev, but
> ci/Intel-compilation is using dpdk repo and apply is failing.
Looking at other patches, I somehow feel that full CI is not running on your patch set.
Not sure what is missing.
>
> Thanks,
> Shijith
On Mon, Apr 12, 2021 at 07:24:21AM +0000, Gujjar, Abhinandan S wrote:
> Hi Shijith,
>
> > -----Original Message-----
> > From: Shijith Thotton <shijith.thotton@gmail.com>
> > Sent: Monday, April 12, 2021 12:33 PM
> > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>
> > Cc: Shijith Thotton <sthotton@marvell.com>; dev@dpdk.org;
> > thomas@monjalon.net; jerinj@marvell.com; hemant.agrawal@nxp.com;
> > nipun.gupta@nxp.com; sachin.saxena@oss.nxp.com; anoobj@marvell.com;
> > matan@nvidia.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> > g.singh@nxp.com; Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan,
> > Jay <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van
> > Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal
> > <gakhil@marvell.com>
> > Subject: Re: [dpdk-dev] [PATCH v6 3/3] test/event_crypto: use crypto
> > adapter enqueue API
> >
> > On Mon, Apr 12, 2021 at 05:10:56AM +0000, Gujjar, Abhinandan S wrote:
> > > Hi Shijith,
> > >
> > > CI is failing for this patch -> ci/Intel-compilation fail apply issues
> > > Whereas CI is not running for other patches. Could you please check?
> > >
> > [snip]
> >
> > Hi Abhinandan,
> >
> > Full CI is run only for the last patch in the series and checkpatch is run for the
> > remaining. I have rebased the series (v6) on top of dpdk-next-eventdev, but
> > ci/Intel-compilation is using dpdk repo and apply is failing.
> Looking at other patches, I somehow feel that full CI is not running on your patch set.
> Not sure what is missing.
I will rebase and send again.
Thanks,
Shijith
This series proposes a new event device enqueue operation if crypto adapter
forward mode is supported. Second patch in the series is the implementation
of the same in PMD. Test application changes are added in third patch.

v7:
- Rebased.

v6:
- Rebased.

v5:
- Set rte_errno if crypto adapter enqueue fails in driver.
- Test application code restructuring.

v4:
- Fix debug build.

v3:
- Added crypto adapter test application changes.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (1):
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (2):
  event/octeontx2: support crypto adapter forward mode
  test/event_crypto: use crypto adapter enqueue API

 app/test/test_event_crypto_adapter.c          | 33 +++++---
 .../prog_guide/event_crypto_adapter.rst       | 69 +++++++------
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 63 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  8 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  1 +
 16 files changed, 286 insertions(+), 60 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

--
2.25.1
From: Akhil Goyal <gakhil@marvell.com>

In case an event from a previous stage is required to be forwarded to a crypto
adapter and the PMD supports an internal event port in the crypto adapter,
exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
we do not have a way to check in the API rte_event_enqueue_burst() whether it
is for the crypto adapter or for the eth Tx adapter. Hence we need a new API,
similar to rte_event_eth_tx_adapter_enqueue(), which can send to a crypto
adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as
it is meant for the event source and not the event destination. And the event
port designated for the crypto adapter is designed to be used for OP_NEW mode.

Hence, in order to support an event PMD which has an internal event port in
the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via the
capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application
should use the rte_event_crypto_adapter_enqueue() API to enqueue events. When
an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the
application can use the API rte_event_enqueue_burst() as it was doing earlier,
i.e. retrieve the event port used by the crypto adapter, bind its event queues
to that port and enqueue events using rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 8 files changed, 143 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index fbb3c5975..5ad107ae3 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -169,6 +169,12 @@ New Features the events across multiple stages. * This also reduces the scheduling overhead on a event device. 
+* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..a4a4129b7 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include <stdint.h> #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as events object supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * events objects it actually enqueued. 
A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..477a59461 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1423,6 +1427,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1435,7 +1441,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 902df0ae3..7e264d3b8 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -143,6 +143,7 @@ EXPERIMENTAL { rte_event_vector_pool_create; rte_event_eth_rx_adapter_vector_limits_get; 
rte_event_eth_rx_adapter_queue_event_vector_config; + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
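The library side of this patch installs a per-device ca_enqueue function pointer with a safe default that fails with ENOTSUP, so that calling the new API on a PMD without internal-port support fails cleanly rather than dereferencing NULL. Below is that pattern reduced to a self-contained sketch; the struct, type, and function names are illustrative stand-ins, not the DPDK symbols:

```c
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

struct mock_event { uint64_t u64; };

typedef uint16_t (*ca_enqueue_t)(void *port, struct mock_event ev[],
				 uint16_t nb_events);

static int mock_errno; /* stand-in for rte_errno */

/* Default handler, mirroring the static fallback added to rte_eventdev.c:
 * report "not supported" and enqueue nothing. */
static uint16_t
default_ca_enqueue(void *port, struct mock_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev; (void)nb_events;
	mock_errno = ENOTSUP;
	return 0;
}

struct mock_eventdev {
	ca_enqueue_t ca_enqueue;
};

/* Mirrors rte_event_pmd_allocate(): install the default; a PMD that
 * supports INTERNAL_PORT_OP_FWD overrides the slot with its own handler. */
static void
mock_eventdev_init(struct mock_eventdev *dev)
{
	dev->ca_enqueue = default_ca_enqueue;
}
```

The thin static inline wrapper in the header then simply dispatches through this pointer, which is why the fast-path cost is a single indirect call.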
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 122 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..4808dca64 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + else + return -EINVAL; + inst.u[0] = 0; 
inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +721,7 @@ 
otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index cdadbb2b2..ee7a6ad51 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from 
drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
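The driver hunk above replaces the per-field s9x response assignments with a single packed 64-bit word: the tag (event type in the top nibble region, flow ID in the low bits) occupies bits 0-31, the schedule type bits 32-33, and the queue ID starts at bit 34. A self-contained sketch of that packing follows; the bit positions are read off the hunk itself, and the event-type constant is a placeholder for RTE_EVENT_TYPE_CRYPTODEV rather than its real value:

```c
#include <stdint.h>

/* Placeholder for RTE_EVENT_TYPE_CRYPTODEV (illustrative value only). */
#define EV_TYPE_CRYPTODEV 0x1U

/* Pack the crypto adapter response info (flow ID, sched type, queue ID)
 * into one instruction word, as the otx2 hunk does for inst.u[2]. */
static uint64_t
pack_response_word(uint32_t flow_id, uint8_t sched_type, uint8_t queue_id)
{
	return (uint64_t)((EV_TYPE_CRYPTODEV << 28) | flow_id) |
	       ((uint64_t)sched_type << 32) |
	       ((uint64_t)queue_id << 34);
}
```

Because the response info now comes from the op's event metadata rather than fixed queue-pair state, each crypto op can target its own destination queue and schedule type in FORWARD mode.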
Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum 
rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - ¶ms.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
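The test-app change gates FORWARD mode on the internal-port capability and records the result in the internal_port_op_fwd flag, which later selects the enqueue API in send_recv_ev(). That control flow can be sketched in a self-contained form as below; the macro values and function name are illustrative placeholders, not the DPDK or test-suite symbols:

```c
#include <stdint.h>

#define CAP_OP_FWD   (1U << 1) /* placeholder capability bit */
#define MODE_NEW     0
#define MODE_FORWARD 1
#define ERR_NOTSUP   (-95)     /* stand-in for -ENOTSUP */

/* Mirrors configure_event_crypto_adapter(): FORWARD mode requires the
 * internal-port capability; when present, flag that the adapter enqueue
 * API must be used instead of rte_event_enqueue_burst(). */
static int
check_forward_mode(int mode, uint32_t cap, int *use_adapter_enqueue)
{
	*use_adapter_enqueue = 0;
	if (mode == MODE_FORWARD) {
		if (!(cap & CAP_OP_FWD))
			return ERR_NOTSUP;
		*use_adapter_enqueue = 1;
	}
	return 0;
}
```

Keeping the decision in one flag lets the rest of the test skip event-port retrieval and queue linking entirely when the internal port is in use, which matches the diff's restructured setup path.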
On Mon, Apr 12, 2021 at 01:05:45PM +0530, Shijith Thotton wrote: > On Mon, Apr 12, 2021 at 07:24:21AM +0000, Gujjar, Abhinandan S wrote: > > Hi Shijith, > > > > > -----Original Message----- > > > From: Shijith Thotton <shijith.thotton@gmail.com> > > > Sent: Monday, April 12, 2021 12:33 PM > > > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com> > > > Cc: Shijith Thotton <sthotton@marvell.com>; dev@dpdk.org; > > > thomas@monjalon.net; jerinj@marvell.com; hemant.agrawal@nxp.com; > > > nipun.gupta@nxp.com; sachin.saxena@oss.nxp.com; anoobj@marvell.com; > > > matan@nvidia.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; > > > g.singh@nxp.com; Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, > > > Jay <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van > > > Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal > > > <gakhil@marvell.com> > > > Subject: Re: [dpdk-dev] [PATCH v6 3/3] test/event_crypto: use crypto > > > adapter enqueue API > > > > > > On Mon, Apr 12, 2021 at 05:10:56AM +0000, Gujjar, Abhinandan S wrote: > > > > Hi Shijith, > > > > > > > > CI is failing for this patch -> ci/Intel-compilation fail apply issues > > > > Whereas CI is not running for other patches. Could you please check? > > > > > > > [snip] > > > > > > Hi Abhinandan, > > > > > > Full CI is run only for the last patch in the series and checkpatch is run for the > > > remaining. I have rebased the series (v6) on top of dpdk-next-eventdev, but > > > ci/Intel-compilation is using dpdk repo and apply is failing. > > Looking at other patches, I somehow feel that full CI is not running on your patch set. > > Not sure what is missing. > > I will rebase and send again. > Issue should be fixed once dpdk-next-eventdev syncs with dpdk repo. I will send again after the sync. Please let me know if any further comments on the series. Also please review https://mails.dpdk.org/archives/dev/2021-April/205249.html. Thanks, Shijith
> -----Original Message----- > From: Shijith Thotton <shijith.thotton@gmail.com> > Sent: Monday, April 12, 2021 7:22 PM > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com> > Cc: Shijith Thotton <sthotton@marvell.com>; dev@dpdk.org; > thomas@monjalon.net; jerinj@marvell.com; hemant.agrawal@nxp.com; > nipun.gupta@nxp.com; sachin.saxena@oss.nxp.com; anoobj@marvell.com; > matan@nvidia.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; > g.singh@nxp.com; Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, > Jay <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van > Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal > <gakhil@marvell.com> > Subject: Re: [dpdk-dev] [PATCH v6 3/3] test/event_crypto: use crypto > adapter enqueue API > > On Mon, Apr 12, 2021 at 01:05:45PM +0530, Shijith Thotton wrote: > > On Mon, Apr 12, 2021 at 07:24:21AM +0000, Gujjar, Abhinandan S wrote: > > > Hi Shijith, > > > > > > > -----Original Message----- > > > > From: Shijith Thotton <shijith.thotton@gmail.com> > > > > Sent: Monday, April 12, 2021 12:33 PM > > > > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com> > > > > Cc: Shijith Thotton <sthotton@marvell.com>; dev@dpdk.org; > > > > thomas@monjalon.net; jerinj@marvell.com; > hemant.agrawal@nxp.com; > > > > nipun.gupta@nxp.com; sachin.saxena@oss.nxp.com; > > > > anoobj@marvell.com; matan@nvidia.com; Zhang, Roy Fan > > > > <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik G > > > > <erik.g.carrillo@intel.com>; Jayatheerthan, Jay > > > > <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van > > > > Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal > > > > <gakhil@marvell.com> > > > > Subject: Re: [dpdk-dev] [PATCH v6 3/3] test/event_crypto: use > > > > crypto adapter enqueue API > > > > > > > > On Mon, Apr 12, 2021 at 05:10:56AM +0000, Gujjar, Abhinandan S > wrote: > > > > > Hi Shijith, > > > > > > > > > > CI is failing for this patch -> ci/Intel-compilation fail apply issues > > > > > Whereas CI 
is not running for other patches. Could you please check? > > > > > > > > > [snip] > > > > > > > > Hi Abhinandan, > > > > > > > > Full CI is run only for the last patch in the series and > > > > checkpatch is run for the remaining. I have rebased the series > > > > (v6) on top of dpdk-next-eventdev, but ci/Intel-compilation is using > dpdk repo and apply is failing. > > > Looking at other patches, I somehow feel that full CI is not running on > your patch set. > > > Not sure what is missing. > > > > I will rebase and send again. > > > > Issue should be fixed once dpdk-next-eventdev syncs with dpdk repo. I will > send again after the sync. Please let me know if any further comments on > the series. > > Also please review https://mails.dpdk.org/archives/dev/2021- > April/205249.html. Sure Shijith. > > Thanks, > Shijith
> ev.event_ptr = op; > - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, > nb_events); > + > + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); > + if (cap & > RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { > + ret = rte_event_crypto_adapter_enqueue(evdev_id, > app_ev_port_id, > + ev, nb_events); > + } else { > + ret = rte_event_crypto_adapter_event_port_get(id, > + &crypto_ev_port_id); > + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); > + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, > + NULL, 1); > + ev.queue_id = app_qid; > + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, > + nb_events); > + } > + > > Querying adapter capabilities > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > diff --git a/doc/guides/rel_notes/release_21_05.rst > b/doc/guides/rel_notes/release_21_05.rst > index fbb3c5975..5ad107ae3 100644 > --- a/doc/guides/rel_notes/release_21_05.rst > +++ b/doc/guides/rel_notes/release_21_05.rst > @@ -169,6 +169,12 @@ New Features > the events across multiple stages. > * This also reduces the scheduling overhead on a event device. > > +* **Enhanced crypto adapter forward mode.** > + > + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events > to crypto > + adapter if forward mode is supported by driver. > + * Added support for crypto adapter forward mode in octeontx2 event and > crypto > + device driver. 
>
>  Removed Items
>  -------------
> diff --git a/lib/librte_eventdev/eventdev_trace_points.c
> b/lib/librte_eventdev/eventdev_trace_points.c
> index 1a0ccc448..3867ec800 100644
> --- a/lib/librte_eventdev/eventdev_trace_points.c
> +++ b/lib/librte_eventdev/eventdev_trace_points.c
> @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start,
>
>  RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop,
>  	lib.eventdev.crypto.stop)
> +
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue,
> +	lib.eventdev.crypto.enq)
> diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h
> b/lib/librte_eventdev/rte_event_crypto_adapter.h
> index 60630ef66..a4a4129b7 100644
> --- a/lib/librte_eventdev/rte_event_crypto_adapter.h
> +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h
> @@ -171,6 +171,7 @@ extern "C" {
>  #include <stdint.h>
>
>  #include "rte_eventdev.h"
> +#include "eventdev_pmd.h"
>
>  /**
>   * Crypto event adapter mode
> @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id);
>  int
>  rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
>
> +/**
> + * Enqueue a burst of crypto operations as events object supplied in

This is still not updated. events object -> event objects.

> + * *rte_event* structure on an event crypto adapter designated by its event
> + * *dev_id* through the event port specified by *port_id*. This function is
> + * supported if the eventdev PMD has the
> + * #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD
> + * capability flag set.
> + *
> + * The *nb_events* parameter is the number of event objects to enqueue which
> + * are supplied in the *ev* array of *rte_event* structure.
> + *
> + * The rte_event_crypto_adapter_enqueue() function returns the number of
> + * events objects it actually enqueued. A return value equal to *nb_events*
> + * means that all event objects have been enqueued.

Same here. events object -> event objects.
With this you can add Acked-by: Abhinandan.gujjar@intel.com

> + *
> + * @param dev_id
> + *  The identifier of the device.
> + * @param port_id
> + *  The identifier of the event port.
> + * @param ev
> + *  Points to an array of *nb_events* objects of type *rte_event* structure
> + *  which contain the event object enqueue operations to be processed.
> + * @param nb_events
> + *  The number of event objects to enqueue, typically number of
> + *  rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...)
> + *  available for this port.
> + *
> + * @return
> + *   The number of event objects actually enqueued on the event device. The
> + *   return value can be less than the value of the *nb_events* parameter when
> + *   the event devices queue is full or if invalid parameters are specified in a
> + *   *rte_event*. If the return value is less than *nb_events*, the remaining
> + *   events at the end of ev[] are not consumed and the caller has to take care
> + *   of them, and rte_errno is set accordingly. Possible errno values include:
> + *   - EINVAL   The port ID is invalid, device ID is invalid, an event's queue
> + *              ID is invalid, or an event's sched type doesn't match the
> + *              capabilities of the destination queue.
> + *   - ENOSPC   The event port was backpressured and unable to enqueue
> + *              one or more events. This error code is only applicable to
> + *              closed systems.
> + */
> +static inline uint16_t
> +rte_event_crypto_adapter_enqueue(uint8_t dev_id,
> +				uint8_t port_id,
> +				struct rte_event ev[],
> +				uint16_t nb_events)
> +{
> +	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +
> +	if (port_id >= dev->data->nb_ports) {
> +		rte_errno = EINVAL;
> +		return 0;
> +	}
> +#endif
> +	rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev,
> +		nb_events);
> +
> +	return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events);
> +}
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/rte_eventdev.c
> b/lib/librte_eventdev/rte_eventdev.c
> index c9bb5d227..594dd5e75 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port,
>  	return 0;
>  }
>
> +static uint16_t
> +rte_event_crypto_adapter_enqueue(__rte_unused void *port,
> +			__rte_unused struct rte_event ev[],
> +			__rte_unused uint16_t nb_events)
> +{
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
>  struct rte_eventdev *
>  rte_event_pmd_allocate(const char *name, int socket_id)
>  {
> @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)
>
>  	eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
>  	eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue;
> +	eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue;
>
>  	if (eventdev->data == NULL) {
>  		struct rte_eventdev_data *eventdev_data = NULL;
> diff --git a/lib/librte_eventdev/rte_eventdev.h
> b/lib/librte_eventdev/rte_eventdev.h
> index 5f1f544cc..477a59461 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port,
>   * burst having same destination Ethernet port & Tx queue.
>   */
>
> +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
> +		struct rte_event ev[], uint16_t nb_events);
> +/**< @internal Enqueue burst of events on crypto adapter */
> +
>  #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
>  /**< @internal Max length of name of event PMD */
>
> @@ -1423,6 +1427,8 @@ struct rte_eventdev {
>  	 */
>  	event_tx_adapter_enqueue txa_enqueue;
>  	/**< Pointer to PMD eth Tx adapter enqueue function. */
> +	event_crypto_adapter_enqueue ca_enqueue;
> +	/**< Pointer to PMD crypto adapter enqueue function. */
>  	struct rte_eventdev_data *data;
>  	/**< Pointer to device data */
>  	struct rte_eventdev_ops *dev_ops;
> @@ -1435,7 +1441,7 @@ struct rte_eventdev {
>  	/**< Flag indicating the device is attached */
>
>  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> -	void *reserved_ptrs[4];	  /**< Reserved for future fields */
> +	void *reserved_ptrs[3];	  /**< Reserved for future fields */
>  } __rte_cache_aligned;
>
>  extern struct rte_eventdev *rte_eventdevs;
> diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h
> b/lib/librte_eventdev/rte_eventdev_trace_fp.h
> index 349129c0f..5639e0b83 100644
> --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h
> +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h
> @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP(
>  	rte_trace_point_emit_u8(flags);
>  )
>
> +RTE_TRACE_POINT_FP(
> +	rte_eventdev_trace_crypto_adapter_enqueue,
> +	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
> +		uint16_t nb_events),
> +	rte_trace_point_emit_u8(dev_id);
> +	rte_trace_point_emit_u8(port_id);
> +	rte_trace_point_emit_ptr(ev_table);
> +	rte_trace_point_emit_u16(nb_events);
> +)
> +
>  RTE_TRACE_POINT_FP(
>  	rte_eventdev_trace_timer_arm_burst,
>  	RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table,
> diff --git a/lib/librte_eventdev/version.map
> b/lib/librte_eventdev/version.map
> index 902df0ae3..7e264d3b8 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -143,6 +143,7 @@ EXPERIMENTAL {
>  	rte_event_vector_pool_create;
>  	rte_event_eth_rx_adapter_vector_limits_get;
>  	rte_event_eth_rx_adapter_queue_event_vector_config;
> +	__rte_eventdev_trace_crypto_adapter_enqueue;
>  };
>
>  INTERNAL {
> --
> 2.25.1
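The dispatch rule the cover letter describes — use the new crypto adapter enqueue when the internal-port OP_FWD capability is advertised, otherwise fall back to the regular event enqueue through the adapter's event port — can be sketched with self-contained stubs. The flag value and the stub functions below are hypothetical stand-ins, not the real DPDK symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
 * the bit position here is illustrative only. */
#define CAP_INTERNAL_PORT_OP_FWD (1u << 3)

static int last_path; /* 1 = adapter enqueue, 2 = plain event enqueue */

/* Stub for rte_event_crypto_adapter_enqueue() */
static uint16_t adapter_enqueue_stub(uint16_t nb_events)
{
	last_path = 1;
	return nb_events;
}

/* Stub for rte_event_enqueue_burst() via the adapter's event port */
static uint16_t event_enqueue_stub(uint16_t nb_events)
{
	last_path = 2;
	return nb_events;
}

/* Mirrors the selection rule from the commit message: with the OP_FWD
 * internal-port capability, use the crypto adapter enqueue; otherwise
 * enqueue to the event device as before. */
static uint16_t enqueue_crypto_events(uint32_t caps, uint16_t nb_events)
{
	if (caps & CAP_INTERNAL_PORT_OP_FWD)
		return adapter_enqueue_stub(nb_events);
	return event_enqueue_stub(nb_events);
}
```

This is the same branch the updated programmer's guide example takes after calling rte_event_crypto_adapter_caps_get().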
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Monday, April 12, 2021 1:14 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net;
> jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> hemant.agrawal@nxp.com; nipun.gupta@nxp.com;
> sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com;
> Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com;
> Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com;
> Van Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal
> <gakhil@marvell.com>
> Subject: [PATCH v7 2/3] event/octeontx2: support crypto adapter forward
> mode
>
> Advertise crypto adapter forward mode capability and set crypto adapter
> enqueue function in driver.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
>  drivers/event/octeontx2/otx2_evdev.c          |  5 +-
>  .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
>  ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
>  .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
>  drivers/event/octeontx2/otx2_worker.h         |  2 +-
>  drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
>  7 files changed, 122 insertions(+), 21 deletions(-)
>  rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
>  create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
>
> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> index cec20b5c6..4808dca64 100644
> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> @@ -7,6 +7,7 @@
>  #include <rte_cryptodev_pmd.h>
>  #include <rte_errno.h>
>  #include <rte_ethdev.h>
> +#include <rte_event_crypto_adapter.h>
>
>  #include "otx2_cryptodev.h"
>  #include "otx2_cryptodev_capabilities.h"
> @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct
> rte_crypto_sym_xform *xform,
>  	return -ENOTSUP;
>  }
>
> -static __rte_always_inline void __rte_hot
> +static __rte_always_inline int32_t __rte_hot
>  otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
>  		    struct cpt_request_info *req,
>  		    void *lmtline,
> +		    struct rte_crypto_op *op,
>  		    uint64_t cpt_inst_w7)
>  {
> +	union rte_event_crypto_metadata *m_data;
>  	union cpt_inst_s inst;
>  	uint64_t lmt_status;
>
> +	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> +		m_data = rte_cryptodev_sym_session_get_user_data(
> +				op->sym->session);

m_data == NULL check & freeing memory is missing. Similar to what you have
done in otx2_ca_enq(). With this change you can add
Acked-by: Abhinandan.gujjar@intel.com

> +	else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> +		 op->private_data_offset)
> +		m_data = (union rte_event_crypto_metadata *)
> +			 ((uint8_t *)op +
> +			  op->private_data_offset);
> +	else
> +		return -EINVAL;
> +
>  	inst.u[0] = 0;
>  	inst.s9x.res_addr = req->comp_baddr;
>  	inst.u[2] = 0;
> @@ -453,12 +467,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
>  	inst.s9x.ei2 = req->ist.ei2;
>  	inst.s9x.ei3 = cpt_inst_w7;
>
> -	inst.s9x.qord = 1;
> -	inst.s9x.grp = qp->ev.queue_id;
> -	inst.s9x.tt = qp->ev.sched_type;
> -	inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) |
> -			qp->ev.flow_id;
> -	inst.s9x.wq_ptr = (uint64_t)req >> 3;
> +	inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) |
> +		      m_data->response_info.flow_id) |
> +		     ((uint64_t)m_data->response_info.sched_type << 32) |
> +		     ((uint64_t)m_data->response_info.queue_id << 34));
> +	inst.u[3] = 1 | (((uint64_t)req >> 3) << 3);
>  	req->qp = qp;
>
>  	do {
> @@ -475,22 +488,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
>  		lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
>  	} while (lmt_status == 0);
>
> +	return 0;
>  }
>
>  static __rte_always_inline int32_t __rte_hot
>  otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
>  		     struct pending_queue *pend_q,
>  		     struct cpt_request_info *req,
> +		     struct rte_crypto_op *op,
>  		     uint64_t cpt_inst_w7)
>  {
>  	void *lmtline = qp->lmtline;
>  	union cpt_inst_s inst;
>  	uint64_t lmt_status;
>
> -	if (qp->ca_enable) {
> -		otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7);
> -		return 0;
> -	}
> +	if (qp->ca_enable)
> +		return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7);
>
>  	if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN))
>  		return -EAGAIN;
> @@ -594,7 +607,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp,
>  		goto req_fail;
>  	}
>
> -	ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7);
> +	ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op,
> +				   sess->cpt_inst_w7);
>
>  	if (unlikely(ret)) {
>  		CPT_LOG_DP_ERR("Could not enqueue crypto req");
> @@ -638,7 +652,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
>  		return ret;
>  	}
>
> -	ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7);
> +	ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7);
>
>  	if (unlikely(ret)) {
>  		/* Free buffer allocated by fill params routines */
> @@ -707,7 +721,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
>  		return ret;
>  	}
>
> -	ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7);
> +	ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7);
>
>  	if (winsz && esn) {
>  		seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low;
> diff --git a/drivers/event/octeontx2/otx2_evdev.c
> b/drivers/event/octeontx2/otx2_evdev.c
> index cdadbb2b2..ee7a6ad51 100644
> --- a/drivers/event/octeontx2/otx2_evdev.c
> +++ b/drivers/event/octeontx2/otx2_evdev.c
> @@ -12,8 +12,9 @@
>  #include <rte_mbuf_pool_ops.h>
>  #include <rte_pci.h>
>
> -#include "otx2_evdev_stats.h"
>  #include "otx2_evdev.h"
> +#include "otx2_evdev_crypto_adptr_tx.h"
> +#include "otx2_evdev_stats.h"
>  #include "otx2_irq.h"
>  #include "otx2_tim_evdev.h"
>
> @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
>  			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
>  			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
>  	}
> +	event_dev->ca_enqueue = otx2_ssogws_ca_enq;
>
>  	if (dev->dual_ws) {
>  		event_dev->enqueue = otx2_ssogws_dual_enq;
> @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
>  			[!!(dev->tx_offloads &
>  					NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
>  		}
> +		event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
>  	}
>
>  	event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
> index 4e8a96cb6..2c9b347f0 100644
> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
> @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev,
>  	RTE_SET_USED(cdev);
>
>  	*caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND |
> -		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW;
> +		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW |
> +		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
>
>  	return 0;
>  }
> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h
> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> similarity index 93%
> rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h
> rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> index 70b63933e..9e331fdd7 100644
> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h
> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> @@ -2,8 +2,8 @@
>   * Copyright (C) 2020 Marvell International Ltd.
>   */
>
> -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_
> -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_
> +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
> +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
>
>  #include <rte_cryptodev.h>
>  #include <rte_cryptodev_pmd.h>
> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1)
>
>  	return (uint64_t)(cop);
>  }
> -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */
> +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */
> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> new file mode 100644
> index 000000000..ecf7eb9f5
> --- /dev/null
> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> @@ -0,0 +1,83 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (C) 2021 Marvell International Ltd.
> + */
> +
> +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
> +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
> +
> +#include <rte_cryptodev.h>
> +#include <rte_cryptodev_pmd.h>
> +#include <rte_event_crypto_adapter.h>
> +#include <rte_eventdev.h>
> +
> +#include <otx2_cryptodev_qp.h>
> +#include <otx2_worker.h>
> +
> +static inline uint16_t
> +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
> +{
> +	union rte_event_crypto_metadata *m_data;
> +	struct rte_crypto_op *crypto_op;
> +	struct rte_cryptodev *cdev;
> +	struct otx2_cpt_qp *qp;
> +	uint8_t cdev_id;
> +	uint16_t qp_id;
> +
> +	crypto_op = ev->event_ptr;
> +	if (crypto_op == NULL)
> +		return 0;
> +
> +	if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> +		m_data = rte_cryptodev_sym_session_get_user_data(
> +				crypto_op->sym->session);
> +		if (m_data == NULL)
> +			goto free_op;
> +
> +		cdev_id = m_data->request_info.cdev_id;
> +		qp_id = m_data->request_info.queue_pair_id;
> +	} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> +		   crypto_op->private_data_offset) {
> +		m_data = (union rte_event_crypto_metadata *)
> +			 ((uint8_t *)crypto_op +
> +			  crypto_op->private_data_offset);
> +		cdev_id = m_data->request_info.cdev_id;
> +		qp_id = m_data->request_info.queue_pair_id;
> +	} else {
> +		goto free_op;
> +	}
> +
> +	cdev = &rte_cryptodevs[cdev_id];
> +	qp = cdev->data->queue_pairs[qp_id];
> +
> +	if (!ev->sched_type)
> +		otx2_ssogws_head_wait(tag_op);
> +	if (qp->ca_enable)
> +		return cdev->enqueue_burst(qp, &crypto_op, 1);
> +
> +free_op:
> +	rte_pktmbuf_free(crypto_op->sym->m_src);
> +	rte_crypto_op_free(crypto_op);
> +	rte_errno = EINVAL;
> +	return 0;
> +}
> +
> +static uint16_t __rte_hot
> +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
> +{
> +	struct otx2_ssogws *ws = port;
> +
> +	RTE_SET_USED(nb_events);
> +
> +	return otx2_ca_enq(ws->tag_op, ev);
> +}
> +
> +static uint16_t __rte_hot
> +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
> +{
> +	struct otx2_ssogws_dual *ws = port;
> +
> +	RTE_SET_USED(nb_events);
> +
> +	return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev);
> +}
> +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */
> diff --git a/drivers/event/octeontx2/otx2_worker.h
> b/drivers/event/octeontx2/otx2_worker.h
> index 2b716c042..fd149be91 100644
> --- a/drivers/event/octeontx2/otx2_worker.h
> +++ b/drivers/event/octeontx2/otx2_worker.h
> @@ -10,7 +10,7 @@
>
>  #include <otx2_common.h>
>  #include "otx2_evdev.h"
> -#include "otx2_evdev_crypto_adptr_dp.h"
> +#include "otx2_evdev_crypto_adptr_rx.h"
>  #include "otx2_ethdev_sec_tx.h"
>
>  /* SSO Operations */
> diff --git a/drivers/event/octeontx2/otx2_worker_dual.h
> b/drivers/event/octeontx2/otx2_worker_dual.h
> index 72b616439..36ae4dd88 100644
> --- a/drivers/event/octeontx2/otx2_worker_dual.h
> +++ b/drivers/event/octeontx2/otx2_worker_dual.h
> @@ -10,7 +10,7 @@
>
>  #include <otx2_common.h>
>  #include "otx2_evdev.h"
> -#include "otx2_evdev_crypto_adptr_dp.h"
> +#include "otx2_evdev_crypto_adptr_rx.h"
>
>  /* SSO Operations */
>  static __rte_always_inline uint16_t
> --
> 2.25.1
Acked-by: Abhinandan.gujjar@intel.com
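The metadata lookup the driver patch performs in otx2_ca_enqueue_req() and otx2_ca_enq() — session ops carry the adapter metadata as session user data, while sessionless ops carry it at private_data_offset inside the op, and anything else is an error — can be modeled with self-contained stubs. The struct and enum below are illustrative stand-ins, not the real rte_crypto/rte_event types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the session-type enum and crypto op layout. */
enum sess_type { WITH_SESSION, SESSIONLESS };

struct crypto_op {
	enum sess_type sess_type;
	uint16_t private_data_offset; /* offset of metadata inside the op */
	void *session_user_data;      /* set by the application for sessions */
};

/* Returns a pointer to the event metadata, or NULL on invalid input,
 * mirroring the -EINVAL / free_op paths in the patch. */
static void *get_event_metadata(struct crypto_op *op)
{
	if (op->sess_type == WITH_SESSION)
		return op->session_user_data;
	if (op->sess_type == SESSIONLESS && op->private_data_offset)
		return (uint8_t *)op + op->private_data_offset;
	return NULL;
}
```

The NULL return is exactly the case the reviewer flags above: the caller must free the op (and its mbuf) rather than dereference the missing metadata.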
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Monday, April 12, 2021 1:14 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net;
> jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> hemant.agrawal@nxp.com; nipun.gupta@nxp.com;
> sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com;
> Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com; Carrillo, Erik
> G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v7 3/3] test/event_crypto: use crypto adapter enqueue API
>
> Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto
> adapter if forward mode is supported in driver.
>
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
> app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++----------
> 1 file changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/app/test/test_event_crypto_adapter.c
> b/app/test/test_event_crypto_adapter.c
> index 335211cd8..f689bc1f2 100644
> --- a/app/test/test_event_crypto_adapter.c
> +++ b/app/test/test_event_crypto_adapter.c
> @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params {
> struct rte_mempool *session_priv_mpool;
> struct rte_cryptodev_config *config;
> uint8_t crypto_event_port_id;
> + uint8_t internal_port_op_fwd;
> };
>
> struct rte_event response_info = {
> @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev)
> struct rte_event recv_ev;
> int ret;
>
> - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev,
> NUM);
> - TEST_ASSERT_EQUAL(ret, NUM,
> - "Failed to send event to crypto adapter\n");
> + if (params.internal_port_op_fwd)
> + ret = rte_event_crypto_adapter_enqueue(evdev,
> TEST_APP_PORT_ID,
> + ev, NUM);
> + else
> + ret = rte_event_enqueue_burst(evdev,
> TEST_APP_PORT_ID, ev, NUM);
> + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto
> +adapter\n");
>
> while (rte_event_dequeue_burst(evdev,
> TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -
> 747,9 +751,12 @@ configure_event_crypto_adapter(enum
> rte_event_crypto_adapter_mode mode)
> !(cap &
> RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND))
> goto adapter_create;
>
> - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) &&
> - !(cap &
> RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD))
> - return -ENOTSUP;
> + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) {
> + if (cap &
> RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
> + params.internal_port_op_fwd = 1;
> + else
> + return -ENOTSUP;
> + }
>
> if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) &&
> !(cap &
> RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW))
> @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum
> rte_event_crypto_adapter_mode mode)
>
> TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n");
>
> - ret =
> rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
> - &params.crypto_event_port_id);
> - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
> + if (!params.internal_port_op_fwd) {
> + ret =
> rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
> +
> &params.crypto_event_port_id);
> + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
> + }
>
> return TEST_SUCCESS;
> }
> @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum
> rte_event_crypto_adapter_mode mode)
>
> if (!crypto_adapter_setup_done) {
> ret = configure_event_crypto_adapter(mode);
> - if (!ret) {
> + if (ret)
> + return ret;
> + if (!params.internal_port_op_fwd) {
> qid = TEST_CRYPTO_EV_QUEUE_ID;
> ret = rte_event_port_link(evdev,
> params.crypto_event_port_id, &qid, NULL,
> 1);
> TEST_ASSERT(ret >= 0, "Failed to link queue %d "
> "port=%u\n", qid,
> params.crypto_event_port_id);
> - } else {
> - return ret;
> }
> crypto_adapter_setup_done = 1;
> }
> --
> 2.25.1
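The mode validation the test patch adds to configure_event_crypto_adapter() — FORWARD mode requires the internal-port OP_FWD capability and records a flag that steers later enqueues to rte_event_crypto_adapter_enqueue(), NEW mode requires the OP_NEW capability — can be sketched with self-contained stubs. The flag values below are hypothetical; only the errno value 95 for ENOTSUP (its common Linux value) is assumed:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the two internal-port capability flags. */
#define CAP_OP_NEW (1u << 1)
#define CAP_OP_FWD (1u << 3)
#define ENOTSUP_ERR 95

enum adapter_mode { OP_NEW, OP_FORWARD };

/* Mirrors the capability checks in the patched test app: on success the
 * internal_port_op_fwd flag tells send_recv_ev() which enqueue to use. */
static int configure_adapter(enum adapter_mode mode, uint32_t cap,
			     uint8_t *internal_port_op_fwd)
{
	*internal_port_op_fwd = 0;
	if (mode == OP_FORWARD) {
		if (cap & CAP_OP_FWD)
			*internal_port_op_fwd = 1;
		else
			return -ENOTSUP_ERR;
	}
	if (mode == OP_NEW && !(cap & CAP_OP_NEW))
		return -ENOTSUP_ERR;
	return 0;
}
```

Note the flag also gates the event-port retrieval and queue linking: when the internal port is used, the test skips rte_event_crypto_adapter_event_port_get() and rte_event_port_link() entirely.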
On Tue, Apr 13, 2021 at 03:34:50AM +0000, Gujjar, Abhinandan S wrote:
>
>
> > -----Original Message-----
> > From: Shijith Thotton <sthotton@marvell.com>
> > Sent: Monday, April 12, 2021 1:14 PM
> > To: dev@dpdk.org
> > Cc: Shijith Thotton <sthotton@marvell.com>; thomas@monjalon.net;
> > jerinj@marvell.com; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>;
> > hemant.agrawal@nxp.com; nipun.gupta@nxp.com;
> > sachin.saxena@oss.nxp.com; anoobj@marvell.com; matan@nvidia.com;
> > Zhang, Roy Fan <roy.fan.zhang@intel.com>; g.singh@nxp.com;
> > Carrillo, Erik G <erik.g.carrillo@intel.com>; Jayatheerthan, Jay
> > <jay.jayatheerthan@intel.com>; pbhagavatula@marvell.com;
> > Van Haaren, Harry <harry.van.haaren@intel.com>; Akhil Goyal
> > <gakhil@marvell.com>
> > Subject: [PATCH v7 2/3] event/octeontx2: support crypto adapter forward
> > mode
> >
> > Advertise crypto adapter forward mode capability and set crypto adapter
> > enqueue function in driver.
> >
> > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> > ---
> >  drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 42 ++++++----
> >  drivers/event/octeontx2/otx2_evdev.c          |  5 +-
> >  .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
> >  ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
> >  .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
> >  drivers/event/octeontx2/otx2_worker.h         |  2 +-
> >  drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
> >  7 files changed, 122 insertions(+), 21 deletions(-)
> >  rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
> >  create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> >
> > diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > index cec20b5c6..4808dca64 100644
> > --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > @@ -7,6 +7,7 @@
> >  #include <rte_cryptodev_pmd.h>
> >  #include <rte_errno.h>
> >  #include <rte_ethdev.h>
> > +#include <rte_event_crypto_adapter.h>
> >
> >  #include "otx2_cryptodev.h"
> >  #include "otx2_cryptodev_capabilities.h"
> > @@ -434,15 +435,28 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> >  	return -ENOTSUP;
> >  }
> >
> > -static __rte_always_inline void __rte_hot
> > +static __rte_always_inline int32_t __rte_hot
> >  otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
> >  		    struct cpt_request_info *req,
> >  		    void *lmtline,
> > +		    struct rte_crypto_op *op,
> >  		    uint64_t cpt_inst_w7)
> >  {
> > +	union rte_event_crypto_metadata *m_data;
> >  	union cpt_inst_s inst;
> >  	uint64_t lmt_status;
> >
> > +	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> > +		m_data = rte_cryptodev_sym_session_get_user_data(
> > +				op->sym->session);
> m_data == NULL check & freeing memory is missing. Similar to what you have
> done in otx2_ca_enq().
> With this change you can add Acked-by: Abhinandan.gujjar@intel.com
>

Thanks. I will send next version with the change.
> > */ > > > > -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ > > -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ > > +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > > +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ > > > > #include <rte_cryptodev.h> > > #include <rte_cryptodev_pmd.h> > > @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) > > > > return (uint64_t)(cop); > > } > > -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ > > +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ > > diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > > b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > > new file mode 100644 > > index 000000000..ecf7eb9f5 > > --- /dev/null > > +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h > > @@ -0,0 +1,83 @@ > > +/* SPDX-License-Identifier: BSD-3-Clause > > + * Copyright (C) 2021 Marvell International Ltd. > > + */ > > + > > +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ > > +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ > > + > > +#include <rte_cryptodev.h> > > +#include <rte_cryptodev_pmd.h> > > +#include <rte_event_crypto_adapter.h> > > +#include <rte_eventdev.h> > > + > > +#include <otx2_cryptodev_qp.h> > > +#include <otx2_worker.h> > > + > > +static inline uint16_t > > +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) { > > + union rte_event_crypto_metadata *m_data; > > + struct rte_crypto_op *crypto_op; > > + struct rte_cryptodev *cdev; > > + struct otx2_cpt_qp *qp; > > + uint8_t cdev_id; > > + uint16_t qp_id; > > + > > + crypto_op = ev->event_ptr; > > + if (crypto_op == NULL) > > + return 0; > > + > > + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { > > + m_data = rte_cryptodev_sym_session_get_user_data( > > + crypto_op->sym->session); > > + if (m_data == NULL) > > + goto free_op; > > + > > + cdev_id = m_data->request_info.cdev_id; > > + qp_id = m_data->request_info.queue_pair_id; > > + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS > > && > > + crypto_op->private_data_offset) { > > + m_data = (union 
rte_event_crypto_metadata *) > > + ((uint8_t *)crypto_op + > > + crypto_op->private_data_offset); > > + cdev_id = m_data->request_info.cdev_id; > > + qp_id = m_data->request_info.queue_pair_id; > > + } else { > > + goto free_op; > > + } > > + > > + cdev = &rte_cryptodevs[cdev_id]; > > + qp = cdev->data->queue_pairs[qp_id]; > > + > > + if (!ev->sched_type) > > + otx2_ssogws_head_wait(tag_op); > > + if (qp->ca_enable) > > + return cdev->enqueue_burst(qp, &crypto_op, 1); > > + > > +free_op: > > + rte_pktmbuf_free(crypto_op->sym->m_src); > > + rte_crypto_op_free(crypto_op); > > + rte_errno = EINVAL; > > + return 0; > > +} > > + > > +static uint16_t __rte_hot > > +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t > > +nb_events) { > > + struct otx2_ssogws *ws = port; > > + > > + RTE_SET_USED(nb_events); > > + > > + return otx2_ca_enq(ws->tag_op, ev); > > +} > > + > > +static uint16_t __rte_hot > > +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t > > +nb_events) { > > + struct otx2_ssogws_dual *ws = port; > > + > > + RTE_SET_USED(nb_events); > > + > > + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); } #endif > > /* > > +_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ > > diff --git a/drivers/event/octeontx2/otx2_worker.h > > b/drivers/event/octeontx2/otx2_worker.h > > index 2b716c042..fd149be91 100644 > > --- a/drivers/event/octeontx2/otx2_worker.h > > +++ b/drivers/event/octeontx2/otx2_worker.h > > @@ -10,7 +10,7 @@ > > > > #include <otx2_common.h> > > #include "otx2_evdev.h" > > -#include "otx2_evdev_crypto_adptr_dp.h" > > +#include "otx2_evdev_crypto_adptr_rx.h" > > #include "otx2_ethdev_sec_tx.h" > > > > /* SSO Operations */ > > diff --git a/drivers/event/octeontx2/otx2_worker_dual.h > > b/drivers/event/octeontx2/otx2_worker_dual.h > > index 72b616439..36ae4dd88 100644 > > --- a/drivers/event/octeontx2/otx2_worker_dual.h > > +++ b/drivers/event/octeontx2/otx2_worker_dual.h > > @@ -10,7 +10,7 @@ > > > > #include <otx2_common.h> > > #include 
"otx2_evdev.h" > > -#include "otx2_evdev_crypto_adptr_dp.h" > > +#include "otx2_evdev_crypto_adptr_rx.h" > > > > /* SSO Operations */ > > static __rte_always_inline uint16_t > > -- > > 2.25.1 >
This series proposes a new event device enqueue operation for use when crypto adapter forward mode is supported. The second patch implements it in the octeontx2 PMD, and the third patch adds the corresponding test application changes.

v8:
- Added metadata NULL check and op free.
- events object -> event objects.
- Added Acked-by.

v7:
- Rebased.

v6:
- Rebased.

v5:
- Set rte_errno if crypto adapter enqueue fails in driver.
- Test application code restructuring.

v4:
- Fix debug build.

v3:
- Added crypto adapter test application changes.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (1):
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (2):
  event/octeontx2: support crypto adapter forward mode
  test/event_crypto: use crypto adapter enqueue API

 app/test/test_event_crypto_adapter.c          | 33 +++++---
 .../prog_guide/event_crypto_adapter.rst       | 69 +++++++------
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 63 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  8 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  1 +
 16 files changed, 293 insertions(+), 60 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

-- 
2.25.1
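The enqueue-path selection this series introduces can be reduced to a pure function for illustration. The capability bit value below is a placeholder; the real flag is RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD from rte_event_crypto_adapter.h:

```c
#include <stdint.h>

/* Hypothetical bit value; the real flag is defined in
 * rte_event_crypto_adapter.h. */
#define MOCK_CAP_INTERNAL_PORT_OP_FWD (1U << 3)

enum enq_path {
	ENQ_VIA_ADAPTER,  /* rte_event_crypto_adapter_enqueue() */
	ENQ_VIA_EVENTDEV  /* rte_event_enqueue_burst() on the bound port */
};

/* In FORWARD mode the application picks its enqueue path from the
 * capabilities reported by rte_event_crypto_adapter_caps_get(). */
static enum enq_path
select_enq_path(uint32_t caps)
{
	return (caps & MOCK_CAP_INTERNAL_PORT_OP_FWD) ?
		ENQ_VIA_ADAPTER : ENQ_VIA_EVENTDEV;
}
```

This is exactly the branch the third patch adds to the test application.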
From: Akhil Goyal <gakhil@marvell.com>

When an event from a previous stage has to be forwarded to a crypto adapter and the PMD supports an internal event port in the crypto adapter, exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, there is no way for rte_event_enqueue_burst() to tell whether an event is destined for the crypto adapter or for the eth Tx adapter. Hence a new API, similar to rte_event_eth_tx_adapter_enqueue(), is needed to send events to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it describes the event source, not the event destination. Also, the event port designated for the crypto adapter is designed to be used in OP_NEW mode.

Hence, to support an event PMD which has an internal event port in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application should use the rte_event_crypto_adapter_enqueue() API to enqueue events. When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the application can keep using rte_event_enqueue_burst() as before, i.e. retrieve the event port used by the crypto adapter, link its event queues to that port and enqueue events using rte_event_enqueue_burst(). 
Signed-off-by: Akhil Goyal <gakhil@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 8 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 8 files changed, 143 insertions(+), 27 deletions(-) diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. 
+In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. 
If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index 9a666b629..fe789c3b7 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -169,6 +169,12 @@ New Features * Added command to display Rx queue used descriptor count. 
``show port (port_id) rxq (queue_id) desc used count`` +* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..f8c6cca87 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include <stdint.h> #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as event objects supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. 
+ * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * event objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..477a59461 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1423,6 +1427,8 @@ struct rte_eventdev { */ event_tx_adapter_enqueue txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ struct rte_eventdev_data *data; /**< Pointer to device data */ struct rte_eventdev_ops *dev_ops; @@ -1435,7 +1441,7 @@ struct rte_eventdev { /**< Flag indicating the device is attached */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 902df0ae3..7e264d3b8 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -143,6 +143,7 @@ EXPERIMENTAL { rte_event_vector_pool_create; rte_event_eth_rx_adapter_vector_limits_get; 
rte_event_eth_rx_adapter_queue_event_vector_config; + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
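The plumbing this patch adds (a per-device ca_enqueue function pointer, defaulted to a stub that flags ENOTSUP, and a thin inline wrapper dispatching through it) can be modeled in plain C. All mock_/stub_ names below are illustrative, not DPDK API:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

static int mock_errno; /* stands in for DPDK's per-lcore rte_errno */

struct mock_event { void *event_ptr; };

typedef uint16_t (*ca_enqueue_t)(void *port, struct mock_event ev[],
				 uint16_t nb_events);

struct mock_eventdev {
	ca_enqueue_t ca_enqueue;
	void *port;
};

/* Default installed at device allocation, mirroring the stub added to
 * rte_event_pmd_allocate() in rte_eventdev.c. */
static uint16_t
stub_ca_enqueue(void *port, struct mock_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev; (void)nb_events;
	mock_errno = ENOTSUP; /* PMD does not implement crypto adapter enqueue */
	return 0;
}

/* A PMD advertising the FWD capability overrides the pointer; here it
 * simply pretends to accept the whole burst. */
static uint16_t
pmd_ca_enqueue(void *port, struct mock_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev;
	return nb_events;
}

/* Fast-path wrapper: a single indirect call, like the new inline
 * rte_event_crypto_adapter_enqueue(). */
static uint16_t
mock_ca_enqueue_burst(struct mock_eventdev *dev, struct mock_event ev[],
		      uint16_t nb_events)
{
	return dev->ca_enqueue(dev->port, ev, nb_events);
}
```

The stub keeps the fast path branch-free: unsupported devices pay one indirect call and report ENOTSUP, instead of every enqueue testing a capability flag.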
Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 129 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index cec20b5c6..c918ed864 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -434,15 +435,35 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + if (m_data == NULL) { + rte_pktmbuf_free(op->sym->m_src); + rte_crypto_op_free(op); + rte_errno = EINVAL; + return -EINVAL; + } + } else if (op->sess_type == 
RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + } else { + return -EINVAL; + } + inst.u[0] = 0; inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -453,12 +474,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -475,22 +495,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -594,7 +614,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -638,7 +659,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, 
sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -707,7 +728,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index cdadbb2b2..ee7a6ad51 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; 
} diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
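The metadata lookup at the top of otx2_ca_enq() above follows the crypto adapter's standard convention: for session-based ops the request info (cdev_id, queue_pair_id) comes from the session's user data, while for sessionless ops it is read `private_data_offset` bytes into the op itself. Below is a self-contained sketch of just that branch, using stub types in place of the real rte_crypto structures (names and layouts here are illustrative, not the DPDK definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stub stand-ins for the rte_crypto types; layouts are illustrative only. */
enum sess_type { WITH_SESSION, SESSIONLESS };

struct metadata {                     /* models union rte_event_crypto_metadata */
	uint8_t cdev_id;
	uint16_t qp_id;
};

struct session {                      /* models a cryptodev session */
	struct metadata *user_data;   /* set by the app before enqueue */
};

struct crypto_op {                    /* models struct rte_crypto_op */
	enum sess_type sess_type;
	uint16_t private_data_offset; /* 0 means no metadata appended */
	struct session *sess;
};

/*
 * Resolve the request metadata the same way otx2_ca_enq() does.
 * Returns NULL when no metadata can be found; at that point the real
 * driver frees the op and sets rte_errno = EINVAL.
 */
static struct metadata *
resolve_metadata(struct crypto_op *op)
{
	if (op->sess_type == WITH_SESSION)
		return op->sess != NULL ? op->sess->user_data : NULL;

	if (op->sess_type == SESSIONLESS && op->private_data_offset != 0)
		return (struct metadata *)((uint8_t *)op +
					   op->private_data_offset);

	return NULL; /* unknown session type or missing metadata */
}
```

For sessionless ops the application appends the metadata directly after the op and records where it starts, which is exactly what `private_data_offset` encodes.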
Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 
+778,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - &params.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + &params.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
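The test change above reduces to one decision, made once in configure_event_crypto_adapter() and consulted on every enqueue in send_recv_ev(). A minimal, self-contained model of that dispatch (the CAP_* flag value and helper names here are stand-ins, not the real DPDK definitions):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in capability bit (illustrative value, not the DPDK flag). */
#define CAP_INTERNAL_PORT_OP_FWD (1u << 0)

enum adapter_mode { OP_NEW, OP_FORWARD };

/* Models params.internal_port_op_fwd from the test application. */
static int internal_port_op_fwd;

/* Models the capability check added to configure_event_crypto_adapter(). */
static int
configure(enum adapter_mode mode, uint32_t cap)
{
	if (mode == OP_FORWARD) {
		if (cap & CAP_INTERNAL_PORT_OP_FWD)
			internal_port_op_fwd = 1;
		else
			return -1; /* -ENOTSUP in the real test */
	}
	return 0;
}

/* Models the per-enqueue choice made in send_recv_ev(). */
static const char *
enqueue_api(void)
{
	return internal_port_op_fwd ? "rte_event_crypto_adapter_enqueue"
				    : "rte_event_enqueue_burst";
}
```

Checking the capability once at configuration time and caching the result keeps the per-event fast path free of capability queries.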
On Tue, Apr 13, 2021 at 4:00 PM Shijith Thotton <sthotton@marvell.com> wrote: > > Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto > adapter if forward mode is supported in driver. > > Signed-off-by: Shijith Thotton <sthotton@marvell.com> Could you check this CI failure? http://patches.dpdk.org/project/dpdk/patch/0ec2e4a2aea0d71b8fa19cc1c1d44cee0eb7533f.1618309291.git.sthotton@marvell.com/ > Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> > --- > app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- > 1 file changed, 21 insertions(+), 12 deletions(-) > > diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c > index 335211cd8..f689bc1f2 100644 > --- a/app/test/test_event_crypto_adapter.c > +++ b/app/test/test_event_crypto_adapter.c > @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { > struct rte_mempool *session_priv_mpool; > struct rte_cryptodev_config *config; > uint8_t crypto_event_port_id; > + uint8_t internal_port_op_fwd; > }; > > struct rte_event response_info = { > @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) > struct rte_event recv_ev; > int ret; > > - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); > - TEST_ASSERT_EQUAL(ret, NUM, > - "Failed to send event to crypto adapter\n"); > + if (params.internal_port_op_fwd) > + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, > + ev, NUM); > + else > + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); > + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); > > while (rte_event_dequeue_burst(evdev, > TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) > @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) > !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) > goto adapter_create; > > - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && > - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) > - 
return -ENOTSUP; > + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { > + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) > + params.internal_port_op_fwd = 1; > + else > + return -ENOTSUP; > + } > > if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && > !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) > @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) > > TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); > > - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, > - &params.crypto_event_port_id); > - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); > + if (!params.internal_port_op_fwd) { > + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, > + &params.crypto_event_port_id); > + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); > + } > > return TEST_SUCCESS; > } > @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) > > if (!crypto_adapter_setup_done) { > ret = configure_event_crypto_adapter(mode); > - if (!ret) { > + if (ret) > + return ret; > + if (!params.internal_port_op_fwd) { > qid = TEST_CRYPTO_EV_QUEUE_ID; > ret = rte_event_port_link(evdev, > params.crypto_event_port_id, &qid, NULL, 1); > TEST_ASSERT(ret >= 0, "Failed to link queue %d " > "port=%u\n", qid, > params.crypto_event_port_id); > - } else { > - return ret; > } > crypto_adapter_setup_done = 1; > } > -- > 2.25.1 >
> +
> #define RTE_EVENTDEV_NAME_MAX_LEN (64)
> /**< @internal Max length of name of event PMD */
>
> @@ -1423,6 +1427,8 @@ struct rte_eventdev {
> */
> event_tx_adapter_enqueue txa_enqueue;
> /**< Pointer to PMD eth Tx adapter enqueue function. */
> + event_crypto_adapter_enqueue ca_enqueue;
> + /**< Pointer to PMD crypto adapter enqueue function. */
> struct rte_eventdev_data *data;
> /**< Pointer to device data */
> struct rte_eventdev_ops *dev_ops;
> @@ -1435,7 +1441,7 @@ struct rte_eventdev {
> /**< Flag indicating the device is attached */
>
> uint64_t reserved_64s[4]; /**< Reserved for future fields */
> - void *reserved_ptrs[4]; /**< Reserved for future fields */
> + void *reserved_ptrs[3]; /**< Reserved for future fields */
> } __rte_cache_aligned;
This change has the following ABI breakage [1].
Could you move ca_enqueue to the end of the struct to avoid the ABI breakage. Also, please update the deprecation notice to move ca_enqueue above (to align with the function pointers) in the 21.11 release.
[1]
[C]'function rte_eventdev* rte_event_pmd_allocate(const char*, int)' at rte_eventdev.c:1467:1 has some indirect sub-type changes:
return type changed:
in pointed to type 'struct rte_eventdev' at rte_eventdev.h:1411:1:
type size hasn't changed
1 data member insertion:
'event_crypto_adapter_enqueue rte_eventdev::ca_enqueue', at offset 512 (in bits) at rte_eventdev.h:1430:1
5 data member changes:
'rte_eventdev_data* rte_eventdev::data' offset changed from 512 to 576 (in bits) (by +64 bits)
'rte_eventdev_ops* rte_eventdev::dev_ops' offset changed from 576 to 640 (in bits) (by +64 bits)
'rte_device* rte_eventdev::dev' offset changed from 640 to 704 (in bits) (by +64 bits)
'uint64_t rte_eventdev::reserved_64s[4]' offset changed from 768 to 832 (in bits) (by +64 bits)
type of 'void* rte_eventdev::reserved_ptrs[4]' changed:
type name changed from 'void*[4]' to 'void*[3]'
array type size changed from 256 to 192
array type subrange 1 changed length from 4 to 3
and offset changed from 1024 to 1088 (in bits) (by +64 bits)
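The breakage reported above is purely a layout effect and can be reproduced with plain offsetof() arithmetic. The sketch below uses simplified stand-in structs (not the real struct rte_eventdev) to show why inserting a member mid-struct shifts every later field, while consuming a tail reserved slot keeps all pre-existing offsets and the overall size stable:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t (*enqueue_fn)(void *port, void *ev, uint16_t nb);

/* Old layout: reserved tail kept for future extensions. */
struct evdev_v1 {
	enqueue_fn txa_enqueue;
	void *data;
	uint64_t reserved_64s[4];
	void *reserved_ptrs[4];
};

/* Breaking change: ca_enqueue inserted mid-struct, so 'data' moves. */
struct evdev_broken {
	enqueue_fn txa_enqueue;
	enqueue_fn ca_enqueue;
	void *data;
	uint64_t reserved_64s[4];
	void *reserved_ptrs[4];
};

/* Compatible change: ca_enqueue takes the place of one reserved pointer. */
struct evdev_v2 {
	enqueue_fn txa_enqueue;
	void *data;
	uint64_t reserved_64s[4];
	enqueue_fn ca_enqueue;
	void *reserved_ptrs[3];
};
```

Even in the compatible layout, the ABI checker still flags the `reserved_ptrs[4]` to `[3]` conversion itself, which is what the libabigail suppression discussed later in this thread addresses.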
Hi,
> > +
> > #define RTE_EVENTDEV_NAME_MAX_LEN (64)
> > /**< @internal Max length of name of event PMD */
> >
> > @@ -1423,6 +1427,8 @@ struct rte_eventdev {
> > */
> > event_tx_adapter_enqueue txa_enqueue;
> > /**< Pointer to PMD eth Tx adapter enqueue function. */
> > + event_crypto_adapter_enqueue ca_enqueue;
> > + /**< Pointer to PMD crypto adapter enqueue function. */
> > struct rte_eventdev_data *data;
> > /**< Pointer to device data */
> > struct rte_eventdev_ops *dev_ops;
> > @@ -1435,7 +1441,7 @@ struct rte_eventdev {
> > /**< Flag indicating the device is attached */
> >
> > uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > - void *reserved_ptrs[4]; /**< Reserved for future fields */
> > + void *reserved_ptrs[3]; /**< Reserved for future fields */
> > } __rte_cache_aligned;
>
>
> This change has following ABI breakage[1].
>
> Could you move ca_enqueue at end of struct to avoid the ABI breakage. Also,
> please update the deprecation notice to move ca_enqueue above (to align
> with function pointers) in 21.11 release.
>
> [1]
> [C]'function rte_eventdev* rte_event_pmd_allocate(const char*, int)' at
> rte_eventdev.c:1467:1 has some indirect sub-type changes:
> return type changed:
> in pointed to type 'struct rte_eventdev' at rte_eventdev.h:1411:1:
> type size hasn't changed
> 1 data member insertion:
> 'event_crypto_adapter_enqueue rte_eventdev::ca_enqueue', at offset
> 512 (in bits) at rte_eventdev.h:1430:1
> 5 data member changes:
> 'rte_eventdev_data* rte_eventdev::data' offset changed from 512 to
> 576 (in bits) (by +64 bits)
> 'rte_eventdev_ops* rte_eventdev::dev_ops' offset changed from 576 to
> 640 (in bits) (by +64 bits)
> 'rte_device* rte_eventdev::dev' offset changed from 640 to 704 (in bits)
> (by +64 bits)
> 'uint64_t rte_eventdev::reserved_64s[4]' offset changed from 768 to
> 832 (in bits) (by +64 bits)
> type of 'void* rte_eventdev::reserved_ptrs[4]' changed:
> type name changed from 'void*[4]' to 'void*[3]'
> array type size changed from 256 to 192
> array type subrange 1 changed length from 4 to 3
> and offset changed from 1024 to 1088 (in bits) (by +64 bits)
>
>
Yes, my bad, it should be added at the end.
But the ABI script will still shout about the 'void*[4]' to 'void*[3]' conversion.
We may need to add something in devtools/libabigail.abignore
so that CI is not broken when reserved fields are changed.
Otherwise, it does not make sense to introduce reserved fields.
Can we have something generic for reserved fields?
Any suggestions?
Regards,
Akhil
14/04/2021 09:58, Akhil Goyal:
> Hi,
> > > +
> > > #define RTE_EVENTDEV_NAME_MAX_LEN (64)
> > > /**< @internal Max length of name of event PMD */
> > >
> > > @@ -1423,6 +1427,8 @@ struct rte_eventdev {
> > > */
> > > event_tx_adapter_enqueue txa_enqueue;
> > > /**< Pointer to PMD eth Tx adapter enqueue function. */
> > > + event_crypto_adapter_enqueue ca_enqueue;
> > > + /**< Pointer to PMD crypto adapter enqueue function. */
> > > struct rte_eventdev_data *data;
> > > /**< Pointer to device data */
> > > struct rte_eventdev_ops *dev_ops;
> > > @@ -1435,7 +1441,7 @@ struct rte_eventdev {
> > > /**< Flag indicating the device is attached */
> > >
> > > uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > > - void *reserved_ptrs[4]; /**< Reserved for future fields */
> > > + void *reserved_ptrs[3]; /**< Reserved for future fields */
> > > } __rte_cache_aligned;
> >
> >
> > This change has following ABI breakage[1].
> >
> > Could you move ca_enqueue at end of struct to avoid the ABI breakage. Also,
> > please update the deprecation notice to move ca_enqueue above (to align
> > with function pointers) in 21.11 release.
> >
> > [1]
> > [C]'function rte_eventdev* rte_event_pmd_allocate(const char*, int)' at
> > rte_eventdev.c:1467:1 has some indirect sub-type changes:
> > return type changed:
> > in pointed to type 'struct rte_eventdev' at rte_eventdev.h:1411:1:
> > type size hasn't changed
> > 1 data member insertion:
> > 'event_crypto_adapter_enqueue rte_eventdev::ca_enqueue', at offset
> > 512 (in bits) at rte_eventdev.h:1430:1
> > 5 data member changes:
> > 'rte_eventdev_data* rte_eventdev::data' offset changed from 512 to
> > 576 (in bits) (by +64 bits)
> > 'rte_eventdev_ops* rte_eventdev::dev_ops' offset changed from 576 to
> > 640 (in bits) (by +64 bits)
> > 'rte_device* rte_eventdev::dev' offset changed from 640 to 704 (in bits)
> > (by +64 bits)
> > 'uint64_t rte_eventdev::reserved_64s[4]' offset changed from 768 to
> > 832 (in bits) (by +64 bits)
> > type of 'void* rte_eventdev::reserved_ptrs[4]' changed:
> > type name changed from 'void*[4]' to 'void*[3]'
> > array type size changed from 256 to 192
> > array type subrange 1 changed length from 4 to 3
> > and offset changed from 1024 to 1088 (in bits) (by +64 bits)
> >
> >
> Yes my bad, it should be added in the end.
> But abi script will still shout for 'void*[4]' to 'void*[3]' conversion.
> We may need to add something in the devtools/libabigail.abignore
> So that, CI is not broken when reserved fields are changed.
> Otherwise, it does not make sense to introduce reserved fields.
> Can we have something generic for reserved fields?
> Any suggestions?
The ABI check is not aware of the reserved fields.
It needs to be added in libabigail.abignore.
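For reference, the suppression that the v9 series below ends up adding to devtools/libabigail.abignore takes libabigail's `[suppress_type]` form (quoted from the patch later in this thread):

```ini
; Ignore fields inserted in place of reserved fields of rte_eventdev
[suppress_type]
	name = rte_eventdev
	has_data_member_inserted_between = {offset_after(attached), end}
```

This tells libabigail to ignore any member inserted between the end of the `attached` field and the end of the struct, i.e. changes confined to the reserved area.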
Hi Thomas,
> 14/04/2021 09:58, Akhil Goyal:
> > Hi,
> > > > +
> > > > #define RTE_EVENTDEV_NAME_MAX_LEN (64)
> > > > /**< @internal Max length of name of event PMD */
> > > >
> > > > @@ -1423,6 +1427,8 @@ struct rte_eventdev {
> > > > */
> > > > event_tx_adapter_enqueue txa_enqueue;
> > > > /**< Pointer to PMD eth Tx adapter enqueue function. */
> > > > + event_crypto_adapter_enqueue ca_enqueue;
> > > > + /**< Pointer to PMD crypto adapter enqueue function. */
> > > > struct rte_eventdev_data *data;
> > > > /**< Pointer to device data */
> > > > struct rte_eventdev_ops *dev_ops;
> > > > @@ -1435,7 +1441,7 @@ struct rte_eventdev {
> > > > /**< Flag indicating the device is attached */
> > > >
> > > > uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > > > - void *reserved_ptrs[4]; /**< Reserved for future fields */
> > > > + void *reserved_ptrs[3]; /**< Reserved for future fields */
> > > > } __rte_cache_aligned;
> > >
> > >
> > > This change has following ABI breakage[1].
> > >
> > > Could you move ca_enqueue at end of struct to avoid the ABI breakage.
> Also,
> > > please update the deprecation notice to move ca_enqueue above (to align
> > > with function pointers) in 21.11 release.
> > >
> > > [1]
> > > [C]'function rte_eventdev* rte_event_pmd_allocate(const char*, int)' at
> > > rte_eventdev.c:1467:1 has some indirect sub-type changes:
> > > return type changed:
> > > in pointed to type 'struct rte_eventdev' at rte_eventdev.h:1411:1:
> > > type size hasn't changed
> > > 1 data member insertion:
> > > 'event_crypto_adapter_enqueue rte_eventdev::ca_enqueue', at
> offset
> > > 512 (in bits) at rte_eventdev.h:1430:1
> > > 5 data member changes:
> > > 'rte_eventdev_data* rte_eventdev::data' offset changed from 512 to
> > > 576 (in bits) (by +64 bits)
> > > 'rte_eventdev_ops* rte_eventdev::dev_ops' offset changed from
> 576 to
> > > 640 (in bits) (by +64 bits)
> > > 'rte_device* rte_eventdev::dev' offset changed from 640 to 704 (in
> bits)
> > > (by +64 bits)
> > > 'uint64_t rte_eventdev::reserved_64s[4]' offset changed from 768 to
> > > 832 (in bits) (by +64 bits)
> > > type of 'void* rte_eventdev::reserved_ptrs[4]' changed:
> > > type name changed from 'void*[4]' to 'void*[3]'
> > > array type size changed from 256 to 192
> > > array type subrange 1 changed length from 4 to 3
> > > and offset changed from 1024 to 1088 (in bits) (by +64 bits)
> > >
> > >
> > Yes my bad, it should be added in the end.
> > But abi script will still shout for 'void*[4]' to 'void*[3]' conversion.
> > We may need to add something in the devtools/libabigail.abignore
> > So that, CI is not broken when reserved fields are changed.
> > Otherwise, it does not make sense to introduce reserved fields.
> > Can we have something generic for reserved fields?
> > Any suggestions?
>
> The ABI check is not aware of the reserved fields.
> It needs to be added in libabigail.abignore.
>
Can I add a generic ignore for all reserved fields?
+; Ignore changes in reserved fields
+[suppress_variable]
+ name_regexp = reserved
Regards,
Akhil
14/04/2021 10:39, Akhil Goyal:
> Hi Thomas,
>
> > 14/04/2021 09:58, Akhil Goyal:
> > > Hi,
> > > > > +
> > > > > #define RTE_EVENTDEV_NAME_MAX_LEN (64)
> > > > > /**< @internal Max length of name of event PMD */
> > > > >
> > > > > @@ -1423,6 +1427,8 @@ struct rte_eventdev {
> > > > > */
> > > > > event_tx_adapter_enqueue txa_enqueue;
> > > > > /**< Pointer to PMD eth Tx adapter enqueue function. */
> > > > > + event_crypto_adapter_enqueue ca_enqueue;
> > > > > + /**< Pointer to PMD crypto adapter enqueue function. */
> > > > > struct rte_eventdev_data *data;
> > > > > /**< Pointer to device data */
> > > > > struct rte_eventdev_ops *dev_ops;
> > > > > @@ -1435,7 +1441,7 @@ struct rte_eventdev {
> > > > > /**< Flag indicating the device is attached */
> > > > >
> > > > > uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > > > > - void *reserved_ptrs[4]; /**< Reserved for future fields */
> > > > > + void *reserved_ptrs[3]; /**< Reserved for future fields */
> > > > > } __rte_cache_aligned;
> > > >
> > > >
> > > > This change has following ABI breakage[1].
> > > >
> > > > Could you move ca_enqueue at end of struct to avoid the ABI breakage.
> > Also,
> > > > please update the deprecation notice to move ca_enqueue above (to align
> > > > with function pointers) in 21.11 release.
> > > >
> > > > [1]
> > > > [C]'function rte_eventdev* rte_event_pmd_allocate(const char*, int)' at
> > > > rte_eventdev.c:1467:1 has some indirect sub-type changes:
> > > > return type changed:
> > > > in pointed to type 'struct rte_eventdev' at rte_eventdev.h:1411:1:
> > > > type size hasn't changed
> > > > 1 data member insertion:
> > > > 'event_crypto_adapter_enqueue rte_eventdev::ca_enqueue', at
> > offset
> > > > 512 (in bits) at rte_eventdev.h:1430:1
> > > > 5 data member changes:
> > > > 'rte_eventdev_data* rte_eventdev::data' offset changed from 512 to
> > > > 576 (in bits) (by +64 bits)
> > > > 'rte_eventdev_ops* rte_eventdev::dev_ops' offset changed from
> > 576 to
> > > > 640 (in bits) (by +64 bits)
> > > > 'rte_device* rte_eventdev::dev' offset changed from 640 to 704 (in
> > bits)
> > > > (by +64 bits)
> > > > 'uint64_t rte_eventdev::reserved_64s[4]' offset changed from 768 to
> > > > 832 (in bits) (by +64 bits)
> > > > type of 'void* rte_eventdev::reserved_ptrs[4]' changed:
> > > > type name changed from 'void*[4]' to 'void*[3]'
> > > > array type size changed from 256 to 192
> > > > array type subrange 1 changed length from 4 to 3
> > > > and offset changed from 1024 to 1088 (in bits) (by +64 bits)
> > > >
> > > >
> > > Yes my bad, it should be added in the end.
> > > But abi script will still shout for 'void*[4]' to 'void*[3]' conversion.
> > > We may need to add something in the devtools/libabigail.abignore
> > > So that, CI is not broken when reserved fields are changed.
> > > Otherwise, it does not make sense to introduce reserved fields.
> > > Can we have something generic for reserved fields?
> > > Any suggestions?
> >
> > The ABI check is not aware of the reserved fields.
> > It needs to be added in libabigail.abignore.
> >
> Can I add a generic ignore for all reserved fields?
>
> +; Ignore changes in reserved fields
> +[suppress_variable]
> + name_regexp = reserved
You can propose it in a separate patch in your series.
From: Akhil Goyal <gakhil@marvell.com> v9: - moved ca_enqueue in the end of rte_eventdev before reserved fields - added exception in libabigail.abignore for reserved fields - added exception in libabigail.abignore for new field addition in place of reserved in rte_eventdev - added deprecation notice to move ca_enqueue up in the structure. v8: - Added metadata NULL check and op free. - events object - > event objects. - Added Acked-by. v7: - Rebased. v6: - Rebased. v5: - Set rte_errno if crypto adapter enqueue fails in driver. - Test application code restructuring. v4: - Fix debug build. v3: - Added crypto adapter test application changes. v2: - Updated release notes. - Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro. - Fixed v1 build error. v1: - Added crypto adapter forward mode support for octeontx2. Akhil Goyal (2): eventdev: introduce crypto adapter enqueue API devtools: add exception for reserved fields Shijith Thotton (2): event/octeontx2: support crypto adapter forward mode test/event_crypto: use crypto adapter enqueue API app/test/test_event_crypto_adapter.c | 33 +++++--- devtools/libabigail.abignore | 11 ++- .../prog_guide/event_crypto_adapter.rst | 69 +++++++++------ doc/guides/rel_notes/deprecation.rst | 4 + doc/guides/rel_notes/release_21_05.rst | 6 ++ drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 ++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 9 +- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 18 files changed, 308 insertions(+), 61 deletions(-) 
rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h -- 2.25.1
From: Akhil Goyal <gakhil@marvell.com> In case an event from a previous stage is required to be forwarded to a crypto adapter and PMD supports internal event port in crypto adapter, exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have a way to check in the API rte_event_enqueue_burst(), whether it is for crypto adapter or for eth tx adapter. Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(), which can send to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it is meant for event source and not event destination. And event port designated for crypto adapter is designed to be used for OP_NEW mode. Hence, in order to support an event PMD which has an internal event port in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application should use rte_event_crypto_adapter_enqueue() API to enqueue events. When internal port is not available(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), application can use API rte_event_enqueue_burst() as it was doing earlier, i.e. retrieve event port used by crypto adapter and bind its event queues to that port and enqueue events using the API rte_event_enqueue_burst(). 
Signed-off-by: Akhil Goyal <gakhil@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- devtools/libabigail.abignore | 7 +- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/deprecation.rst | 4 ++ doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 9 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 10 files changed, 154 insertions(+), 28 deletions(-) diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index 6c0b38984..46a5a6af5 100644 --- a/devtools/libabigail.abignore +++ b/devtools/libabigail.abignore @@ -19,4 +19,9 @@ ; Ignore fields inserted in cacheline boundary of rte_cryptodev [suppress_type] name = rte_cryptodev - has_data_member_inserted_between = {offset_after(attached), end} \ No newline at end of file + has_data_member_inserted_between = {offset_after(attached), end} + +; Ignore fields inserted in place of reserved fields of rte_eventdev +[suppress_type] + name = rte_eventdev + has_data_member_inserted_between = {offset_after(attached), end} diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. 
-If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. 
_figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . 
ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 2afc84c39..a973de4a9 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -127,6 +127,10 @@ Deprecation Notices values to the function ``rte_event_eth_rx_adapter_queue_add`` using the structure ``rte_event_eth_rx_adapter_queue_add``. +* eventdev: The function pointer ``ca_enqueue`` in structure ``rte_eventdev`` + will be moved after ``txa_enqueue`` so that all enqueue/dequeue + function pointers are adjacent to each other. + * sched: To allow more traffic classes, flexible mapping of pipe queues to traffic classes, and subport level configuration of pipes and queues changes will be made to macros, data structures and API functions defined diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index a6ea9d593..48f1860cc 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -177,6 +177,12 @@ New Features * Added command to display Rx queue used descriptor count. 
``show port (port_id) rxq (queue_id) desc used count`` +* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..f8c6cca87 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include <stdint.h> #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as event objects supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. 
+ * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * event objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..a9c496fb6 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
 */
+typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
+				struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on crypto adapter */
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */

@@ -1434,8 +1438,11 @@ struct rte_eventdev {
 	uint8_t attached : 1;
 	/**< Flag indicating the device is attached */

+	event_crypto_adapter_enqueue ca_enqueue;
+	/**< Pointer to PMD crypto adapter enqueue function. */
+
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];	/**< Reserved for future fields */
+	void *reserved_ptrs[3];	/**< Reserved for future fields */
 } __rte_cache_aligned;

 extern struct rte_eventdev *rte_eventdevs;

diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h
index 349129c0f..5639e0b83 100644
--- a/lib/librte_eventdev/rte_eventdev_trace_fp.h
+++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h
@@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP(
 	rte_trace_point_emit_u8(flags);
 )

+RTE_TRACE_POINT_FP(
+	rte_eventdev_trace_crypto_adapter_enqueue,
+	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
+		uint16_t nb_events),
+	rte_trace_point_emit_u8(dev_id);
+	rte_trace_point_emit_u8(port_id);
+	rte_trace_point_emit_ptr(ev_table);
+	rte_trace_point_emit_u16(nb_events);
+)
+
 RTE_TRACE_POINT_FP(
 	rte_eventdev_trace_timer_arm_burst,
 	RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table,

diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
index 902df0ae3..7e264d3b8 100644
--- a/lib/librte_eventdev/version.map
+++ b/lib/librte_eventdev/version.map
@@ -143,6 +143,7 @@ EXPERIMENTAL {
 	rte_event_vector_pool_create;
 	rte_event_eth_rx_adapter_vector_limits_get;
 	rte_event_eth_rx_adapter_queue_event_vector_config;
+	__rte_eventdev_trace_crypto_adapter_enqueue;
 };

 INTERNAL {
-- 
2.25.1
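The patch above installs a default `ca_enqueue` stub in rte_event_pmd_allocate(), so the fast-path inline wrapper can make a single indirect call with no NULL check: a PMD that never overrides the pointer simply fails the burst with ENOTSUP. Below is a minimal, self-contained sketch of that pattern. The `fake_*` names and `fake_errno` are stand-ins invented for this sketch; the real DPDK structures and the per-lcore `rte_errno` are much more involved.

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for struct rte_event (real one carries metadata + payload). */
struct fake_event {
	void *event_ptr;
};

typedef uint16_t (*ca_enqueue_t)(void *port, struct fake_event ev[],
				 uint16_t nb_events);

/* Stand-in for struct rte_eventdev: only the fields this sketch needs. */
struct fake_eventdev {
	ca_enqueue_t ca_enqueue;
	void *port;
};

static int fake_errno; /* stand-in for the per-lcore rte_errno */

/* Default stub, mirroring the rte_event_crypto_adapter_enqueue() added
 * to rte_eventdev.c: enqueue nothing and report ENOTSUP. */
static uint16_t
default_ca_enqueue(void *port, struct fake_event ev[], uint16_t nb_events)
{
	(void)port;
	(void)ev;
	(void)nb_events;
	fake_errno = ENOTSUP;
	return 0;
}

/* A PMD supporting OP_FWD overrides the pointer at device setup time. */
static uint16_t
pmd_ca_enqueue(void *port, struct fake_event ev[], uint16_t nb_events)
{
	(void)port;
	(void)ev;
	return nb_events; /* pretend the full burst was accepted */
}

/* Same shape as the new inline API: one indirect call, no NULL test. */
static uint16_t
crypto_adapter_enqueue(struct fake_eventdev *dev, struct fake_event ev[],
		       uint16_t nb_events)
{
	return dev->ca_enqueue(dev->port, ev, nb_events);
}
```

The always-valid-pointer design keeps the hot path branch-free; the cost of the unsupported case (an errno write) is only paid by misconfigured callers.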
From: Shijith Thotton <sthotton@marvell.com> Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 129 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index fc4d5bac4..5ca16a5ae 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -438,15 +439,35 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + if (m_data == NULL) { + rte_pktmbuf_free(op->sym->m_src); + rte_crypto_op_free(op); + rte_errno = EINVAL; + return -EINVAL; + 
} + } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + } else { + return -EINVAL; + } + inst.u[0] = 0; inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -457,12 +478,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -479,22 +499,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -598,7 +618,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -642,7 +663,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = 
otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -711,7 +732,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index cdadbb2b2..ee7a6ad51 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + 
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
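The otx2_ca_enqueue_req() change above stops taking the response event from the queue pair and instead packs the per-op response metadata (flow id, sched type, queue id) into instruction word 2. A self-contained sketch of that packing follows, using the same shift positions as the diff (`28`, `32`, `34`); the event-type value `0x8` and the helper names are assumptions made for illustration, not the hardware definition.

```c
#include <stdint.h>

/* Stand-in for RTE_EVENT_TYPE_CRYPTODEV; value assumed for illustration. */
#define CPT_EV_TYPE_CRYPTODEV 0x8ULL

/* Build a 64-bit word the way otx2_ca_enqueue_req() now builds inst.u[2]:
 * tag (event type in the top nibble of the low word, flow id below it)
 * in bits [31:0], sched type starting at bit 32, queue/group id starting
 * at bit 34. */
static uint64_t
pack_inst_w2(uint32_t flow_id, uint8_t sched_type, uint8_t queue_id)
{
	return ((CPT_EV_TYPE_CRYPTODEV << 28) | flow_id) |
	       ((uint64_t)sched_type << 32) |
	       ((uint64_t)queue_id << 34);
}

/* Unpack helpers, used only to sanity-check the layout. */
static uint8_t
w2_queue_id(uint64_t w2)
{
	return (uint8_t)(w2 >> 34);
}

static uint8_t
w2_sched_type(uint64_t w2)
{
	return (uint8_t)((w2 >> 32) & 0x3);
}

static uint32_t
w2_tag(uint64_t w2)
{
	return (uint32_t)w2;
}
```

Packing the response event into the submitted instruction is what lets the hardware enqueue the completion directly to the right event queue without the adapter's software path.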
From: Shijith Thotton <sthotton@marvell.com> Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & 
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - ¶ms.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
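The test restructuring above reduces to one decision recorded at adapter-configure time: in FORWARD mode, remember whether the PMD has an internal event port so send_recv_ev() can later choose between the two enqueue APIs. A standalone sketch of that logic is below; the capability bit value and the helper names are invented for illustration (the real flag is RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD).

```c
#include <stdint.h>

/* Bit position assumed for this sketch only. */
#define CAP_INTERNAL_PORT_OP_FWD (1u << 3)

enum adapter_mode { ADAPTER_OP_NEW, ADAPTER_OP_FORWARD };

/* Mirrors configure_event_crypto_adapter(): in FORWARD mode, either the
 * capability is present (use rte_event_crypto_adapter_enqueue() later)
 * or FORWARD mode is unsupported; in NEW mode, the plain
 * rte_event_enqueue_burst() path is always used. */
static int
select_fwd_path(enum adapter_mode mode, uint32_t caps,
		uint8_t *internal_port_op_fwd)
{
	*internal_port_op_fwd = 0;

	if (mode == ADAPTER_OP_FORWARD) {
		if (caps & CAP_INTERNAL_PORT_OP_FWD)
			*internal_port_op_fwd = 1;
		else
			return -1; /* the test returns -ENOTSUP here */
	}
	return 0;
}
```

Note that when the internal port is used, the test also skips event_port_get() and the queue link, since no software adapter port exists to link against.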
From: Akhil Goyal <gakhil@marvell.com>

Certain structures are added with reserved fields
to address any future enhancements to retain ABI
compatibility.
However, ABI script will still report error as it
is not aware of reserved fields. Hence, adding a
generic exception for reserved fields.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 devtools/libabigail.abignore | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 46a5a6af5..a9d284f76 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -25,3 +25,7 @@
 [suppress_type]
     name = rte_eventdev
     has_data_member_inserted_between = {offset_after(attached), end}
+
+; Ignore changes in reserved fields
+[suppress_variable]
+    name_regexp = reserved
-- 
2.25.1
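Since name_regexp is a regular expression, the pattern above is unanchored and matches any variable or member whose name merely contains "reserved" (reserved_64s and reserved_ptrs, but also a hypothetical "unreserved_x"). If a stricter match were ever wanted, the expression could be anchored; the fragment below is an illustration only and not part of this patch:

```
; Illustration only: match members whose names *start* with "reserved"
[suppress_variable]
    name_regexp = ^reserved
```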
14/04/2021 14:20, gakhil@marvell.com:
> From: Akhil Goyal <gakhil@marvell.com>
>
> Certain structures are added with reserved fields
> to address any future enhancements to retain ABI
> compatibility.
> However, ABI script will still report error as it
> is not aware of reserved fields. Hence, adding a
> generic exception for reserved fields.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> devtools/libabigail.abignore | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 46a5a6af5..a9d284f76 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -25,3 +25,7 @@
> [suppress_type]
> name = rte_eventdev
> has_data_member_inserted_between = {offset_after(attached), end}
> +
> +; Ignore changes in reserved fields
> +[suppress_variable]
> + name_regexp = reserved
If we do that as first patch of this series,
we don't need the exception on rte_eventdev, right?
Hi Thomas,
14/04/2021 14:20, gakhil@marvell.com:
> From: Akhil Goyal <gakhil@marvell.com>
>
> Certain structures are added with reserved fields
> to address any future enhancements to retain ABI
> compatibility.
> However, ABI script will still report error as it
> is not aware of reserved fields. Hence, adding a
> generic exception for reserved fields.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> devtools/libabigail.abignore | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 46a5a6af5..a9d284f76 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -25,3 +25,7 @@
> [suppress_type]
> name = rte_eventdev
> has_data_member_inserted_between = {offset_after(attached), end}
> +
> +; Ignore changes in reserved fields
> +[suppress_variable]
> + name_regexp = reserved
If we do that as first patch of this series,
we don't need the exception on rte_eventdev, right?
It will still be required, as we have 2 issues
1. Reserved_ptr[4] to reserved[3]
2. Additional member ca_enqueue added
So we need both.
Regards,
Akhil
14/04/2021 16:16, Akhil Goyal:
> Hi Thomas,
>
> > 14/04/2021 14:20, gakhil@marvell.com:
> > > From: Akhil Goyal <gakhil@marvell.com>
> > >
> > > Certain structures are added with reserved fields
> > > to address any future enhancements to retain ABI
> > > compatibility.
> > > However, ABI script will still report error as it
> > > is not aware of reserved fields. Hence, adding a
> > > generic exception for reserved fields.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> > >
> > > devtools/libabigail.abignore | 4 ++++
> > > 1 file changed, 4 insertions(+)
> > >
> > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > index 46a5a6af5..a9d284f76 100644
> > > --- a/devtools/libabigail.abignore
> > > +++ b/devtools/libabigail.abignore
> > > @@ -25,3 +25,7 @@
> > >
> > > [suppress_type]
> > >
> > > name = rte_eventdev
> > > has_data_member_inserted_between = {offset_after(attached), end}
> > >
> > > +
> > > +; Ignore changes in reserved fields
> > > +[suppress_variable]
> > > + name_regexp = reserved
> >
> > If we do that as first patch of this series,
> > we don't need the exception on rte_eventdev, right?
>
> It will still be required, as we have 2 issues
> 1. Reserved_ptr[4] to reserved[3]
> 2. Additional member ca_enqueue added
>
> So we need both.
If this patch is required, it should not be the last one.
> > > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > > index 46a5a6af5..a9d284f76 100644
> > > > --- a/devtools/libabigail.abignore
> > > > +++ b/devtools/libabigail.abignore
> > > > @@ -25,3 +25,7 @@
> > > >
> > > > [suppress_type]
> > > >
> > > > name = rte_eventdev
> > > > has_data_member_inserted_between = {offset_after(attached),
> end}
> > > >
> > > > +
> > > > +; Ignore changes in reserved fields
> > > > +[suppress_variable]
> > > > + name_regexp = reserved
> > >
> > > If we do that as first patch of this series,
> > > we don't need the exception on rte_eventdev, right?
> >
> > It will still be required, as we have 2 issues
> > 1. Reserved_ptr[4] to reserved[3]
> > 2. Additional member ca_enqueue added
> >
> > So we need both.
>
> If this patch is required, it should not be the last one.
>
Ok, I will resend.
From: Akhil Goyal <gakhil@marvell.com>

v10:
- moved last patch to first patch of the series.

v9:
- moved ca_enqueue in the end of rte_eventdev before reserved fields
- added exception in libabigail.abignore for reserved fields
- added exception in libabigail.abignore for new field addition in place
  of reserved in rte_eventdev
- added deprecation notice to move ca_enqueue up in the structure.

v8:
- Added metadata NULL check and op free.
- events object -> event objects.
- Added Acked-by.

v7:
- Rebased.

v6:
- Rebased.

v5:
- Set rte_errno if crypto adapter enqueue fails in driver.
- Test application code restructuring.

v4:
- Fix debug build.

v3:
- Added crypto adapter test application changes.

v2:
- Updated release notes.
- Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro.
- Fixed v1 build error.

v1:
- Added crypto adapter forward mode support for octeontx2.

Akhil Goyal (2):
  devtools: add exception for reserved fields
  eventdev: introduce crypto adapter enqueue API

Shijith Thotton (2):
  event/octeontx2: support crypto adapter forward mode
  test/event_crypto: use crypto adapter enqueue API

 app/test/test_event_crypto_adapter.c          | 33 +++++---
 devtools/libabigail.abignore                  | 11 ++-
 .../prog_guide/event_crypto_adapter.rst       | 69 +++++++++------
 doc/guides/rel_notes/deprecation.rst          |  4 +
 doc/guides/rel_notes/release_21_05.rst        |  6 ++
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 lib/librte_eventdev/eventdev_trace_points.c   |  3 +
 .../rte_event_crypto_adapter.h                | 63 ++++++++++++++
 lib/librte_eventdev/rte_eventdev.c            | 10 +++
 lib/librte_eventdev/rte_eventdev.h            |  9 +-
 lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
 lib/librte_eventdev/version.map               |  1 +
 18 files changed, 308 insertions(+), 61 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

-- 
2.25.1
From: Akhil Goyal <gakhil@marvell.com>

Certain structures are added with reserved fields
to address any future enhancements to retain ABI
compatibility.
However, ABI script will still report error as it
is not aware of reserved fields. Hence, adding a
generic exception for reserved fields.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 devtools/libabigail.abignore | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 6c0b38984..654755314 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -19,4 +19,8 @@
 ; Ignore fields inserted in cacheline boundary of rte_cryptodev
 [suppress_type]
     name = rte_cryptodev
-    has_data_member_inserted_between = {offset_after(attached), end}
\ No newline at end of file
+    has_data_member_inserted_between = {offset_after(attached), end}
+
+; Ignore changes in reserved fields
+[suppress_variable]
+    name_regexp = reserved
-- 
2.25.1
From: Akhil Goyal <gakhil@marvell.com>

In case an event from a previous stage is required to be forwarded to a
crypto adapter and the PMD supports an internal event port in the crypto
adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, there is no way to
check inside the rte_event_enqueue_burst() API whether an event is meant
for the crypto adapter or for the eth Tx adapter. Hence, a new API is
needed, similar to rte_event_eth_tx_adapter_enqueue(), which can send to
a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that
decision, as it describes the event source, not the event destination.
Also, the event port designated for the crypto adapter is designed to be
used in OP_NEW mode only.

Hence, in order to support an event PMD which has an internal event port
in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed
via the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the
application should use the rte_event_crypto_adapter_enqueue() API to
enqueue events. When an internal port is not available
(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), the application can use the
rte_event_enqueue_burst() API as before, i.e. retrieve the event port
used by the crypto adapter, link its event queues to that port, and
enqueue events using rte_event_enqueue_burst().
Signed-off-by: Akhil Goyal <gakhil@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- devtools/libabigail.abignore | 5 ++ .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/deprecation.rst | 4 ++ doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 9 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 10 files changed, 153 insertions(+), 27 deletions(-) diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index 654755314..31c42cb55 100644 --- a/devtools/libabigail.abignore +++ b/devtools/libabigail.abignore @@ -24,3 +24,8 @@ ; Ignore changes in reserved fields [suppress_variable] name_regexp = reserved + +; Ignore fields inserted in place of reserved fields of rte_eventdev +[suppress_type] + name = rte_eventdev + has_data_member_inserted_between = {offset_after(attached), end} diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. 
The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. 
-For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . 
        ev.event_ptr = op;
-        ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events);
+
+        ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap);
+        if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) {
+                ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id,
+                                                       ev, nb_events);
+        } else {
+                ret = rte_event_crypto_adapter_event_port_get(id,
+                                                        &crypto_ev_port_id);
+                ret = rte_event_queue_setup(evdev_id, app_qid, NULL);
+                ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid,
+                                          NULL, 1);
+                ev.queue_id = app_qid;
+                ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev,
+                                              nb_events);
+        }
+

 Querying adapter capabilities
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 2afc84c39..a973de4a9 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -127,6 +127,10 @@ Deprecation Notices
   values to the function ``rte_event_eth_rx_adapter_queue_add`` using
   the structure ``rte_event_eth_rx_adapter_queue_add``.

+* eventdev: The function pointer ``ca_enqueue`` in structure ``rte_eventdev``
+  will be moved after ``txa_enqueue`` so that all enqueue/dequeue
+  function pointers are adjacent to each other.
+
 * sched: To allow more traffic classes, flexible mapping of pipe queues to
   traffic classes, and subport level configuration of pipes and queues
   changes will be made to macros, data structures and API functions defined

diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index b21906ccf..773dcbd58 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -182,6 +182,12 @@ New Features

   * Added command to display Rx queue used descriptor count.
     ``show port (port_id) rxq (queue_id) desc used count``

+* **Enhanced crypto adapter forward mode.**
+
+  * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to
+    crypto adapter if forward mode is supported by driver.
+  * Added support for crypto adapter forward mode in octeontx2 event and
+    crypto device driver.

 Removed Items
 -------------

diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c
index 1a0ccc448..3867ec800 100644
--- a/lib/librte_eventdev/eventdev_trace_points.c
+++ b/lib/librte_eventdev/eventdev_trace_points.c
@@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start,

 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop,
	lib.eventdev.crypto.stop)
+
+RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue,
+	lib.eventdev.crypto.enq)

diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h
index 60630ef66..f8c6cca87 100644
--- a/lib/librte_eventdev/rte_event_crypto_adapter.h
+++ b/lib/librte_eventdev/rte_event_crypto_adapter.h
@@ -171,6 +171,7 @@ extern "C" {
 #include <stdint.h>

 #include "rte_eventdev.h"
+#include "eventdev_pmd.h"

 /**
  * Crypto event adapter mode
@@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id);
 int
 rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);

+/**
+ * Enqueue a burst of crypto operations as event objects supplied in *rte_event*
+ * structure on an event crypto adapter designated by its event *dev_id* through
+ * the event port specified by *port_id*. This function is supported if the
+ * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD
+ * capability flag set.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue which are
+ * supplied in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_crypto_adapter_enqueue() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param dev_id
+ *  The identifier of the device.
+ * @param port_id
+ *  The identifier of the event port.
+ * @param ev
+ *  Points to an array of *nb_events* objects of type *rte_event* structure
+ *  which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *  The number of event objects to enqueue, typically number of
+ *  rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...)
+ *  available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event devices queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed and the caller has to take care
+ *   of them, and rte_errno is set accordingly. Possible errno values include:
+ *   - EINVAL   The port ID is invalid, device ID is invalid, an event's queue
+ *              ID is invalid, or an event's sched type doesn't match the
+ *              capabilities of the destination queue.
+ *   - ENOSPC   The event port was backpressured and unable to enqueue
+ *              one or more events. This error code is only applicable to
+ *              closed systems.
+ */
+static inline uint16_t
+rte_event_crypto_adapter_enqueue(uint8_t dev_id,
+				uint8_t port_id,
+				struct rte_event ev[],
+				uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev,
+		nb_events);
+
+	return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events);
+}
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index c9bb5d227..594dd5e75 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port,
	return 0;
 }

+static uint16_t
+rte_event_crypto_adapter_enqueue(__rte_unused void *port,
+			__rte_unused struct rte_event ev[],
+			__rte_unused uint16_t nb_events)
+{
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
 struct rte_eventdev *
 rte_event_pmd_allocate(const char *name, int socket_id)
 {
@@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)

	eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
	eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue;
+	eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue;

	if (eventdev->data == NULL) {
		struct rte_eventdev_data *eventdev_data = NULL;

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 5f1f544cc..a9c496fb6 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port,
  * burst having same destination Ethernet port & Tx queue.
 */

+typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
+				struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on crypto adapter */
+
 #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
 /**< @internal Max length of name of event PMD */

@@ -1434,8 +1438,11 @@ struct rte_eventdev {
	uint8_t attached : 1;
	/**< Flag indicating the device is attached */

+	event_crypto_adapter_enqueue ca_enqueue;
+	/**< Pointer to PMD crypto adapter enqueue function. */
+
	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
+	void *reserved_ptrs[3];   /**< Reserved for future fields */
 } __rte_cache_aligned;

 extern struct rte_eventdev *rte_eventdevs;

diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h
index 349129c0f..5639e0b83 100644
--- a/lib/librte_eventdev/rte_eventdev_trace_fp.h
+++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h
@@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP(
	rte_trace_point_emit_u8(flags);
 )

+RTE_TRACE_POINT_FP(
+	rte_eventdev_trace_crypto_adapter_enqueue,
+	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
+		uint16_t nb_events),
+	rte_trace_point_emit_u8(dev_id);
+	rte_trace_point_emit_u8(port_id);
+	rte_trace_point_emit_ptr(ev_table);
+	rte_trace_point_emit_u16(nb_events);
+)
+
 RTE_TRACE_POINT_FP(
	rte_eventdev_trace_timer_arm_burst,
	RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table,

diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
index 902df0ae3..7e264d3b8 100644
--- a/lib/librte_eventdev/version.map
+++ b/lib/librte_eventdev/version.map
@@ -143,6 +143,7 @@ EXPERIMENTAL {
	rte_event_vector_pool_create;
	rte_event_eth_rx_adapter_vector_limits_get;
	rte_event_eth_rx_adapter_queue_event_vector_config;
+	__rte_eventdev_trace_crypto_adapter_enqueue;
 };

 INTERNAL {
--
2.25.1
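[Editor's note] The default `ca_enqueue` slot installed by `rte_event_pmd_allocate()` above means that calling the new API on a PMD that never advertised the forward-mode capability fails cleanly with `rte_errno = ENOTSUP`, instead of crashing on a NULL pointer. A minimal self-contained sketch of that dispatch pattern — all `fake_*` names are illustrative stand-ins, not real DPDK types:

```c
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-ins for the real DPDK types; names and layout are illustrative. */
struct fake_event { uint64_t word0; uint64_t u64; };

static int fake_errno; /* models rte_errno */

typedef uint16_t (*ca_enqueue_t)(void *port, struct fake_event ev[],
				 uint16_t nb_events);

/* Default installed at device allocation: reject with ENOTSUP, enqueue 0. */
static uint16_t
default_ca_enqueue(void *port, struct fake_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev; (void)nb_events;
	fake_errno = ENOTSUP;
	return 0;
}

/* A PMD advertising RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD
 * overrides the pointer with its own fast-path implementation. */
static uint16_t
pmd_ca_enqueue(void *port, struct fake_event ev[], uint16_t nb_events)
{
	(void)port; (void)ev;
	return nb_events; /* pretend every event was accepted */
}

struct fake_eventdev { ca_enqueue_t ca_enqueue; };

/* Mirrors rte_event_crypto_adapter_enqueue(): one indirect call. */
static uint16_t
adapter_enqueue(struct fake_eventdev *dev, struct fake_event ev[],
		uint16_t nb_events)
{
	return dev->ca_enqueue(NULL, ev, nb_events);
}
```

This is why the library never needs a NULL check on the fast path: the slot is always populated, either by the default or by the driver.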
From: Shijith Thotton <sthotton@marvell.com>

Advertise crypto adapter forward mode capability and set crypto adapter
enqueue function in driver.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
---
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++----
 drivers/event/octeontx2/otx2_evdev.c          |  5 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  3 +-
 ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} |  6 +-
 .../octeontx2/otx2_evdev_crypto_adptr_tx.h    | 83 +++++++++++++++++++
 drivers/event/octeontx2/otx2_worker.h         |  2 +-
 drivers/event/octeontx2/otx2_worker_dual.h    |  2 +-
 7 files changed, 129 insertions(+), 21 deletions(-)
 rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h

diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index fc4d5bac4..5ca16a5ae 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -7,6 +7,7 @@
 #include <rte_cryptodev_pmd.h>
 #include <rte_errno.h>
 #include <rte_ethdev.h>
+#include <rte_event_crypto_adapter.h>

 #include "otx2_cryptodev.h"
 #include "otx2_cryptodev_capabilities.h"
@@ -438,15 +439,35 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
	return -ENOTSUP;
 }

-static __rte_always_inline void __rte_hot
+static __rte_always_inline int32_t __rte_hot
 otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
		    struct cpt_request_info *req,
		    void *lmtline,
+		    struct rte_crypto_op *op,
		    uint64_t cpt_inst_w7)
 {
+	union rte_event_crypto_metadata *m_data;
	union cpt_inst_s inst;
	uint64_t lmt_status;

+	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		m_data = rte_cryptodev_sym_session_get_user_data(
+				op->sym->session);
+		if (m_data == NULL) {
+			rte_pktmbuf_free(op->sym->m_src);
+			rte_crypto_op_free(op);
+			rte_errno = EINVAL;
+			return -EINVAL;
+		}
+	} else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
+		   op->private_data_offset) {
+		m_data = (union rte_event_crypto_metadata *)
+			 ((uint8_t *)op +
+			  op->private_data_offset);
+	} else {
+		return -EINVAL;
+	}
+
	inst.u[0] = 0;
	inst.s9x.res_addr = req->comp_baddr;
	inst.u[2] = 0;
@@ -457,12 +478,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
	inst.s9x.ei2 = req->ist.ei2;
	inst.s9x.ei3 = cpt_inst_w7;

-	inst.s9x.qord = 1;
-	inst.s9x.grp = qp->ev.queue_id;
-	inst.s9x.tt = qp->ev.sched_type;
-	inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) |
-			qp->ev.flow_id;
-	inst.s9x.wq_ptr = (uint64_t)req >> 3;
+	inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) |
+		      m_data->response_info.flow_id) |
+		     ((uint64_t)m_data->response_info.sched_type << 32) |
+		     ((uint64_t)m_data->response_info.queue_id << 34));
+	inst.u[3] = 1 | (((uint64_t)req >> 3) << 3);
	req->qp = qp;

	do {
@@ -479,22 +499,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
		lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
	} while (lmt_status == 0);

+	return 0;
 }

 static __rte_always_inline int32_t __rte_hot
 otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
		     struct pending_queue *pend_q,
		     struct cpt_request_info *req,
+		     struct rte_crypto_op *op,
		     uint64_t cpt_inst_w7)
 {
	void *lmtline = qp->lmtline;
	union cpt_inst_s inst;
	uint64_t lmt_status;

-	if (qp->ca_enable) {
-		otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7);
-		return 0;
-	}
+	if (qp->ca_enable)
+		return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7);

	if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN))
		return -EAGAIN;
@@ -598,7 +618,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp,
		goto req_fail;
	}

-	ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7);
+	ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op,
+				   sess->cpt_inst_w7);

	if (unlikely(ret)) {
		CPT_LOG_DP_ERR("Could not enqueue crypto req");
@@ -642,7 +663,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
		return ret;
	}
-	ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7);
+	ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7);

	if (unlikely(ret)) {
		/* Free buffer allocated by fill params routines */
@@ -711,7 +732,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
		return ret;
	}

-	ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7);
+	ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7);

	if (winsz && esn) {
		seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low;

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index cdadbb2b2..ee7a6ad51 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -12,8 +12,9 @@
 #include <rte_mbuf_pool_ops.h>
 #include <rte_pci.h>

-#include "otx2_evdev_stats.h"
 #include "otx2_evdev.h"
+#include "otx2_evdev_crypto_adptr_tx.h"
+#include "otx2_evdev_stats.h"
 #include "otx2_irq.h"
 #include "otx2_tim_evdev.h"
@@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
	}
+	event_dev->ca_enqueue = otx2_ssogws_ca_enq;

	if (dev->dual_ws) {
		event_dev->enqueue = otx2_ssogws_dual_enq;
@@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
		}
+		event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
	}

	event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;

diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
index 4e8a96cb6..2c9b347f0 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
@@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev,
	RTE_SET_USED(cdev);

	*caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND |
-		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW;
+		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW |
+		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;

	return 0;
 }

diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
similarity index 93%
rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h
rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
index 70b63933e..9e331fdd7 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
@@ -2,8 +2,8 @@
  * Copyright (C) 2020 Marvell International Ltd.
  */

-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_
+#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
+#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_

 #include <rte_cryptodev.h>
 #include <rte_cryptodev_pmd.h>
@@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1)

	return (uint64_t)(cop);
 }
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */
+#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */

diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
new file mode 100644
index 000000000..ecf7eb9f5
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C) 2021 Marvell International Ltd.
+ */
+
+#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
+#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
+
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_event_crypto_adapter.h>
+#include <rte_eventdev.h>
+
+#include <otx2_cryptodev_qp.h>
+#include <otx2_worker.h>
+
+static inline uint16_t
+otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
+{
+	union rte_event_crypto_metadata *m_data;
+	struct rte_crypto_op *crypto_op;
+	struct rte_cryptodev *cdev;
+	struct otx2_cpt_qp *qp;
+	uint8_t cdev_id;
+	uint16_t qp_id;
+
+	crypto_op = ev->event_ptr;
+	if (crypto_op == NULL)
+		return 0;
+
+	if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		m_data = rte_cryptodev_sym_session_get_user_data(
+				crypto_op->sym->session);
+		if (m_data == NULL)
+			goto free_op;
+
+		cdev_id = m_data->request_info.cdev_id;
+		qp_id = m_data->request_info.queue_pair_id;
+	} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
+		   crypto_op->private_data_offset) {
+		m_data = (union rte_event_crypto_metadata *)
+			 ((uint8_t *)crypto_op +
+			  crypto_op->private_data_offset);
+		cdev_id = m_data->request_info.cdev_id;
+		qp_id = m_data->request_info.queue_pair_id;
+	} else {
+		goto free_op;
+	}
+
+	cdev = &rte_cryptodevs[cdev_id];
+	qp = cdev->data->queue_pairs[qp_id];
+
+	if (!ev->sched_type)
+		otx2_ssogws_head_wait(tag_op);
+	if (qp->ca_enable)
+		return cdev->enqueue_burst(qp, &crypto_op, 1);
+
+free_op:
+	rte_pktmbuf_free(crypto_op->sym->m_src);
+	rte_crypto_op_free(crypto_op);
+	rte_errno = EINVAL;
+	return 0;
+}
+
+static uint16_t __rte_hot
+otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
+{
+	struct otx2_ssogws *ws = port;
+
+	RTE_SET_USED(nb_events);
+
+	return otx2_ca_enq(ws->tag_op, ev);
+}
+
+static uint16_t __rte_hot
+otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
+{
+	struct otx2_ssogws_dual *ws = port;
+
+	RTE_SET_USED(nb_events);
+
+	return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev);
+}
+#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */

diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index 2b716c042..fd149be91 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -10,7 +10,7 @@
 #include <otx2_common.h>

 #include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_dp.h"
+#include "otx2_evdev_crypto_adptr_rx.h"
 #include "otx2_ethdev_sec_tx.h"

 /* SSO Operations */

diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
index 72b616439..36ae4dd88 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.h
+++ b/drivers/event/octeontx2/otx2_worker_dual.h
@@ -10,7 +10,7 @@
 #include <otx2_common.h>

 #include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_dp.h"
+#include "otx2_evdev_crypto_adptr_rx.h"

 /* SSO Operations */
 static __rte_always_inline uint16_t
--
2.25.1
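[Editor's note] The session/sessionless lookup in `otx2_ca_enq()` above follows the generic adapter contract: request/response metadata lives either in the session's user data (`RTE_CRYPTO_OP_WITH_SESSION`) or at `private_data_offset` inside the op (sessionless). A self-contained sketch of just that branch logic — all `fake_*` names are stand-ins, not DPDK types:

```c
#include <stdint.h>
#include <stddef.h>

enum sess_type { WITH_SESSION, SESSIONLESS };

struct fake_metadata { uint8_t cdev_id; uint16_t queue_pair_id; };

struct fake_session { void *user_data; };

struct fake_crypto_op {
	enum sess_type sess_type;
	uint16_t private_data_offset; /* 0 means "no private data" */
	struct fake_session *session;
	/* metadata may follow the op when private_data_offset != 0 */
};

/* Mirrors the lookup in otx2_ca_enq()/otx2_ca_enqueue_req():
 * returns NULL when no usable metadata is attached, which is the
 * case where the driver frees the op and sets rte_errno = EINVAL. */
static struct fake_metadata *
op_metadata_get(struct fake_crypto_op *op)
{
	if (op->sess_type == WITH_SESSION)
		return op->session ? op->session->user_data : NULL;
	if (op->sess_type == SESSIONLESS && op->private_data_offset)
		return (struct fake_metadata *)
			((uint8_t *)op + op->private_data_offset);
	return NULL;
}
```

Applications must therefore attach metadata (via `rte_cryptodev_sym_session_set_user_data()` or the op's private data area) before enqueuing in forward mode; otherwise the driver has no way to learn the target cdev/queue pair or the response event.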
From: Shijith Thotton <sthotton@marvell.com>

Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto
adapter if forward mode is supported in driver.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
---
 app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++----------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
index 335211cd8..f689bc1f2 100644
--- a/app/test/test_event_crypto_adapter.c
+++ b/app/test/test_event_crypto_adapter.c
@@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params {
	struct rte_mempool *session_priv_mpool;
	struct rte_cryptodev_config *config;
	uint8_t crypto_event_port_id;
+	uint8_t internal_port_op_fwd;
 };

 struct rte_event response_info = {
@@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev)
	struct rte_event recv_ev;
	int ret;

-	ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM);
-	TEST_ASSERT_EQUAL(ret, NUM,
-			  "Failed to send event to crypto adapter\n");
+	if (params.internal_port_op_fwd)
+		ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID,
+						       ev, NUM);
+	else
+		ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM);
+	TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n");

	while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID,
			&recv_ev, NUM, 0) == 0)
@@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode)
	    !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND))
		goto adapter_create;

-	if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) &&
-	    !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD))
-		return -ENOTSUP;
+	if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) {
+		if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
+			params.internal_port_op_fwd = 1;
+		else
+			return -ENOTSUP;
+	}

	if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) &&
	    !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW))
@@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode)

	TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n");

-	ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
-				&params.crypto_event_port_id);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
+	if (!params.internal_port_op_fwd) {
+		ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID,
+					&params.crypto_event_port_id);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n");
+	}

	return TEST_SUCCESS;
 }
@@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode)

	if (!crypto_adapter_setup_done) {
		ret = configure_event_crypto_adapter(mode);
-		if (!ret) {
+		if (ret)
+			return ret;
+		if (!params.internal_port_op_fwd) {
			qid = TEST_CRYPTO_EV_QUEUE_ID;
			ret = rte_event_port_link(evdev,
					params.crypto_event_port_id,
					&qid, NULL, 1);
			TEST_ASSERT(ret >= 0, "Failed to link queue %d "
				    "port=%u\n", qid,
				    params.crypto_event_port_id);
-		} else {
-			return ret;
		}
		crypto_adapter_setup_done = 1;
	}
--
2.25.1
On Wed, Apr 14, 2021 at 8:04 PM <gakhil@marvell.com> wrote:
>
> From: Akhil Goyal <gakhil@marvell.com>
>
> Certain structures are added with reserved fields
> to address any future enhancements to retain ABI
> compatibility.
> However, ABI script will still report error as it
> is not aware of reserved fields. Hence, adding a
> generic exception for reserved fields.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> devtools/libabigail.abignore | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 6c0b38984..654755314 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -19,4 +19,8 @@
> ; Ignore fields inserted in cacheline boundary of rte_cryptodev
> [suppress_type]
> name = rte_cryptodev
> - has_data_member_inserted_between = {offset_after(attached), end}
> \ No newline at end of file
> + has_data_member_inserted_between = {offset_after(attached), end}
> +
> +; Ignore changes in reserved fields
> +[suppress_variable]
> + name_regexp = reserved
> --
> 2.25.1
>
Mm, this rule is a bit scary, as it matches anything with "reserved" in it.
You need an exception anyway to insert the new fields (like in patch 2).
Can you test your series dropping this patch 1 ?
--
David Marchand
Hi David,

> > Certain structures are added with reserved fields
> > to address any future enhancements to retain ABI
> > compatibility.
> > However, ABI script will still report error as it
> > is not aware of reserved fields. Hence, adding a
> > generic exception for reserved fields.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> >  devtools/libabigail.abignore | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > index 6c0b38984..654755314 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -19,4 +19,8 @@
> >  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
> >  [suppress_type]
> >         name = rte_cryptodev
> > -       has_data_member_inserted_between = {offset_after(attached), end}
> > \ No newline at end of file
> > +       has_data_member_inserted_between = {offset_after(attached), end}
> > +
> > +; Ignore changes in reserved fields
> > +[suppress_variable]
> > +       name_regexp = reserved
>
> Mm, this rule is a bit scary, as it matches anything with "reserved" in it.

Why do you feel it is scary? Reserved is something which may change at any
time, just like experimental. Hence creating a generic exception rule for it
makes sense, and it is done intentionally in this patch.

>
> You need an exception anyway to insert the new fields (like in patch 2).
> Can you test your series dropping this patch 1 ?

It will not work, as there are 2 changes:
1. addition of ca_enqueue after attached. This is taken care of by the
   exception set in patch 2.
2. change of reserved_ptr[4] -> reserved_ptr[3]. For this change we need an
   exception for reserved.

Regards,
Akhil
On Thu, Apr 15, 2021 at 7:33 AM Akhil Goyal <gakhil@marvell.com> wrote:
>
> Hi David,
> > > Certain structures are added with reserved fields
> > > to address any future enhancements to retain ABI
> > > compatibility.
> > > However, ABI script will still report error as it
> > > is not aware of reserved fields. Hence, adding a
> > > generic exception for reserved fields.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> > >  devtools/libabigail.abignore | 6 +++++-
> > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > index 6c0b38984..654755314 100644
> > > --- a/devtools/libabigail.abignore
> > > +++ b/devtools/libabigail.abignore
> > > @@ -19,4 +19,8 @@
> > >  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
> > >  [suppress_type]
> > >         name = rte_cryptodev
> > >  -      has_data_member_inserted_between = {offset_after(attached), end}
> > > \ No newline at end of file
> > > +       has_data_member_inserted_between = {offset_after(attached), end}
> > > +
> > > +; Ignore changes in reserved fields
> > > +[suppress_variable]
> > > +       name_regexp = reserved
> > Mm, this rule is a bit scary, as it matches anything with "reserved" in it.
>
> Why do you feel it is scary? Reserved is something which may change at any time
> Just like experimental. Hence creating a generic exception rule for it make sense
> And it is done intentionally in this patch.

The reserved regexp on the name of a variable / struct field is too lax.
Anything could be named with reserved in it.
If we have clear patterns, they must be preferred, like (untested)
name_regexp = ^reserved_(64|ptr)s$

Experimental is different.
This is a symbol version tag, which has a clear meaning and can't be
used for anything else.

>
> >
> > You need an exception anyway to insert the new fields (like in patch 2).
> > Can you test your series dropping this patch 1 ?
> It will not work, as there are 2 changes,
> 1. addition of ca_enqueue after attached. This is taken care by the exception set in patch 2
> 2. change in the reserved_ptr[4] -> reserved_ptr[3]. For this change we need exception for reserved.

In the eventdev struct, reserved fields are all in the range between
the attached field and the end of the struct.
I pushed your series without patch 1 to a branch of mine, and it
passes the check fine:
https://github.com/david-marchand/dpdk/commits/crypto_fwd_mode_v10
https://github.com/david-marchand/dpdk/runs/2350324578?check_suite_focus=true#step:15:8549

--
David Marchand
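[Editor's note] The struct-scoped suppression David's test relies on is the same form already used for rte_cryptodev in devtools/libabigail.abignore. A sketch of what an rte_eventdev counterpart could look like (the `attached` anchor is taken from this discussion; the exact entry as merged is an assumption):

```
; Ignore fields inserted between 'attached' and the end of rte_eventdev
[suppress_type]
        name = rte_eventdev
        has_data_member_inserted_between = {offset_after(attached), end}
```

Because the reserved fields of rte_eventdev all sit after `attached`, this single rule covers both the insertion of `ca_enqueue` and the shrinking of `reserved_ptrs`, which is why dropping patch 1 still passes the check.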
On Thu, Apr 15, 2021 at 09:26:38AM +0200, David Marchand wrote:
> On Thu, Apr 15, 2021 at 7:33 AM Akhil Goyal <gakhil@marvell.com> wrote:
> >
> > Hi David,
> > > > Certain structures are added with reserved fields
> > > > to address any future enhancements to retain ABI
> > > > compatibility.
> > > > However, ABI script will still report error as it
> > > > is not aware of reserved fields. Hence, adding a
> > > > generic exception for reserved fields.
> > > >
> > > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > > ---
> > > > devtools/libabigail.abignore | 6 +++++-
> > > > 1 file changed, 5 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > > index 6c0b38984..654755314 100644
> > > > --- a/devtools/libabigail.abignore
> > > > +++ b/devtools/libabigail.abignore
> > > > @@ -19,4 +19,8 @@
> > > > ; Ignore fields inserted in cacheline boundary of rte_cryptodev
> > > > [suppress_type]
> > > > name = rte_cryptodev
> > > > - has_data_member_inserted_between = {offset_after(attached), end}
> > > > \ No newline at end of file
> > > > + has_data_member_inserted_between = {offset_after(attached), end}
> > > > +
> > > > +; Ignore changes in reserved fields
> > > > +[suppress_variable]
> > > > + name_regexp = reserved
> > > Mm, this rule is a bit scary, as it matches anything with "reserved" in it.
> >
> > Why do you feel it is scary? Reserved is something which may change at any time
> > Just like experimental. Hence creating a generic exception rule for it make sense
> > And it is done intentionally in this patch.
>
> The reserved regexp on the name of a variable / struct field is too lax.
> Anything could be named with reserved in it.
> If we have clear patterns, they must be preferred, like (untested)
> name_regexp = ^reserved_(64|ptr)s$
>
+1 to have a clear name. I would suggest using a "__reserved" prefix, since
no real field name should ever start with that prefix.
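[Editor's note] Combining David's tighter regexp with Bruce's naming-convention idea, a safer version of the rule from patch 1 might look like the following sketch (untested, using the thread's own libabigail syntax; the `__reserved` variant assumes fields are renamed to follow that convention):

```
; Too lax -- matches any name containing "reserved":
;   [suppress_variable]
;           name_regexp = reserved
;
; Tighter: match only the actual reserved field names:
[suppress_variable]
        name_regexp = ^reserved_(64|ptr)s$
;
; Or, with a dedicated naming convention no real field would use:
; [suppress_variable]
;         name_regexp = ^__reserved
```

Either form avoids silently suppressing ABI reports for unrelated fields that merely contain the word "reserved".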
15/04/2021 10:25, Bruce Richardson:
> On Thu, Apr 15, 2021 at 09:26:38AM +0200, David Marchand wrote:
> > On Thu, Apr 15, 2021 at 7:33 AM Akhil Goyal <gakhil@marvell.com> wrote:
> > > > > +; Ignore changes in reserved fields
> > > > > +[suppress_variable]
> > > > > + name_regexp = reserved
> > > >
> > > > Mm, this rule is a bit scary, as it matches anything with "reserved" in it.
> > >
> > > Why do you feel it is scary? Reserved is something which may change at any time
> > > Just like experimental. Hence creating a generic exception rule for it make sense
> > > And it is done intentionally in this patch.
> >
> > The reserved regexp on the name of a variable / struct field is too lax.
> > Anything could be named with reserved in it.
> > If we have clear patterns, they must be preferred, like (untested)
> > name_regexp = ^reserved_(64|ptr)s$
> >
> +1 to have a clear name. I would suggest using a "__reserved" prefix, since
> no real field name should ever start with that prefix.
+1 for the double underscore
Changing it now does not break API as it is not supposed to be used.
> > > > ; Ignore fields inserted in cacheline boundary of rte_cryptodev
> > > > [suppress_type]
> > > > name = rte_cryptodev
> > > > - has_data_member_inserted_between = {offset_after(attached),
> end}
> > > > \ No newline at end of file
> > > > + has_data_member_inserted_between = {offset_after(attached),
> end}
> > > > +
> > > > +; Ignore changes in reserved fields
> > > > +[suppress_variable]
> > > > + name_regexp = reserved
> > > Mm, this rule is a bit scary, as it matches anything with "reserved" in it.
> >
> > Why do you feel it is scary? Reserved is something which may change at
> any time
> > Just like experimental. Hence creating a generic exception rule for it make
> sense
> > And it is done intentionally in this patch.
>
> The reserved regexp on the name of a variable / struct field is too lax.
> Anything could be named with reserved in it.
> If we have clear patterns, they must be preferred, like (untested)
> name_regexp = ^reserved_(64|ptr)s$
>
>
> Experimental is different.
> This is a symbol version tag, which has a clear meaning and can't be
> used for anything else.
>
>
> >
> > >
> > > You need an exception anyway to insert the new fields (like in patch 2).
> > > Can you test your series dropping this patch 1 ?
> > It will not work, as there are 2 changes,
> > 1. addition of ca_enqueue after attached. This is taken care by the
> exception set in patch 2
> > 2. change in the reserved_ptr[4] -> reserved_ptr[3]. For this change we
> need exception for reserved.
>
> In the eventdev struct, reserved fields are all in the range between
> the attached field and the end of the struct.
> I pushed your series without patch 1 to a branch of mine, and it
> passes the check fine:
> https://github.com/david-marchand/dpdk/runs/2350324578?check_suite_focus=true#step:15:8549
>
Yes, it will work. I originally put the new field after the reserved fields and
it was creating issues, so I added this exception.
But later I decided to move it above the reserved fields and missed
that it would work without the reserved exception.
Hence we can drop the first patch for now.
Regards,
Akhil
From: Akhil Goyal <gakhil@marvell.com> v11: - removed first patch. - removed deprecation notice. It is sent separately. http://patches.dpdk.org/project/dpdk/patch/20210415090859.1319171-1-gakhil@marvell.com/ v10: - moved last patch to first patch of the series. v9: - moved ca_enqueue in the end of rte_eventdev before reserved fields - added exception in libabigail.abignore for reserved fields - added exception in libabigail.abignore for new field addition in place of reserved in rte_eventdev - added deprecation notice to move ca_enqueue up in the structure. v8: - Added metadata NULL check and op free. - events object - > event objects. - Added Acked-by. v7: - Rebased. v6: - Rebased. v5: - Set rte_errno if crypto adapter enqueue fails in driver. - Test application code restructuring. v4: - Fix debug build. v3: - Added crypto adapter test application changes. v2: - Updated release notes. - Made use of RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET macro. - Fixed v1 build error. v1: - Added crypto adapter forward mode support for octeontx2. 
Akhil Goyal (1): eventdev: introduce crypto adapter enqueue API Shijith Thotton (2): event/octeontx2: support crypto adapter forward mode test/event_crypto: use crypto adapter enqueue API app/test/test_event_crypto_adapter.c | 33 +++++--- devtools/libabigail.abignore | 7 +- .../prog_guide/event_crypto_adapter.rst | 69 +++++++++------ doc/guides/rel_notes/release_21_05.rst | 6 ++ drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 ++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 9 +- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 17 files changed, 300 insertions(+), 61 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h -- 2.25.1
From: Akhil Goyal <gakhil@marvell.com> In case an event from a previous stage is required to be forwarded to a crypto adapter and PMD supports internal event port in crypto adapter, exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have a way to check in the API rte_event_enqueue_burst(), whether it is for crypto adapter or for eth tx adapter. Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(), which can send to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it is meant for event source and not event destination. And event port designated for crypto adapter is designed to be used for OP_NEW mode. Hence, in order to support an event PMD which has an internal event port in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application should use rte_event_crypto_adapter_enqueue() API to enqueue events. When internal port is not available(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), application can use API rte_event_enqueue_burst() as it was doing earlier, i.e. retrieve event port used by crypto adapter and bind its event queues to that port and enqueue events using the API rte_event_enqueue_burst(). 
Signed-off-by: Akhil Goyal <gakhil@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- devtools/libabigail.abignore | 7 +- .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 9 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 9 files changed, 150 insertions(+), 28 deletions(-) diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index 6c0b38984..46a5a6af5 100644 --- a/devtools/libabigail.abignore +++ b/devtools/libabigail.abignore @@ -19,4 +19,9 @@ ; Ignore fields inserted in cacheline boundary of rte_cryptodev [suppress_type] name = rte_cryptodev - has_data_member_inserted_between = {offset_after(attached), end} \ No newline at end of file + has_data_member_inserted_between = {offset_after(attached), end} + +; Ignore fields inserted in place of reserved fields of rte_eventdev +[suppress_type] + name = rte_eventdev + has_data_member_inserted_between = {offset_after(attached), end} diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. 
Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. 
The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . 
ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index b21906ccf..773dcbd58 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -182,6 +182,12 @@ New Features * Added command to display Rx queue used descriptor count. ``show port (port_id) rxq (queue_id) desc used count`` +* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. 
Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..f8c6cca87 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include <stdint.h> #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as event objects supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * event objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. 
+ * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. 
+ */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..a9c496fb6 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & Tx queue. 
*/ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1434,8 +1438,11 @@ struct rte_eventdev { uint8_t attached : 1; /**< Flag indicating the device is attached */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ + uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 902df0ae3..7e264d3b8 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -143,6 +143,7 @@ EXPERIMENTAL { rte_event_vector_pool_create; rte_event_eth_rx_adapter_vector_limits_get; rte_event_eth_rx_adapter_queue_event_vector_config; + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { -- 2.25.1
From: Shijith Thotton <sthotton@marvell.com> Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 129 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index fc4d5bac4..5ca16a5ae 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include <rte_cryptodev_pmd.h> #include <rte_errno.h> #include <rte_ethdev.h> +#include <rte_event_crypto_adapter.h> #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -438,15 +439,35 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + if (m_data == NULL) { + rte_pktmbuf_free(op->sym->m_src); + rte_crypto_op_free(op); + rte_errno = EINVAL; + return -EINVAL; + 
} + } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + } else { + return -EINVAL; + } + inst.u[0] = 0; inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -457,12 +478,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -479,22 +499,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -598,7 +618,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -642,7 +663,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = 
otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -711,7 +732,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index cdadbb2b2..ee7a6ad51 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include <rte_mbuf_pool_ops.h> #include <rte_pci.h> -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + 
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include <rte_cryptodev.h> #include <rte_cryptodev_pmd.h> @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include <rte_cryptodev.h> +#include <rte_cryptodev_pmd.h> +#include <rte_event_crypto_adapter.h> +#include <rte_eventdev.h> + +#include <otx2_cryptodev_qp.h> +#include <otx2_worker.h> + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* 
_OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include <otx2_common.h> #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t -- 2.25.1
From: Shijith Thotton <sthotton@marvell.com> Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton <sthotton@marvell.com> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com> --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & 
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - ¶ms.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; } -- 2.25.1
On Thu, 15 Apr 2021 08:31:30 +0000
Akhil Goyal <gakhil@marvell.com> wrote:
Is there a check that size didn't change?
For example if a reserved field was removed.
On Thu, Apr 15, 2021 at 2:44 PM <gakhil@marvell.com> wrote:
>
> From: Akhil Goyal <gakhil@marvell.com>
>
> In case an event from a previous stage is required to be forwarded
> to a crypto adapter and PMD supports internal event port in crypto
> adapter, exposed via capability
> RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have
> a way to check in the API rte_event_enqueue_burst(), whether it is
> for crypto adapter or for eth tx adapter.
>
> Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(),
> which can send to a crypto adapter.
>
> Note that RTE_EVENT_TYPE_* cannot be used to make that decision,
> as it is meant for event source and not event destination.
> And event port designated for crypto adapter is designed to be used
> for OP_NEW mode.
>
> Hence, in order to support an event PMD which has an internal event port
> in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed
> via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD,
> application should use rte_event_crypto_adapter_enqueue() API to enqueue
> events.
>
> When internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode),
> application can use API rte_event_enqueue_burst() as it was doing earlier,
> i.e. retrieve event port used by crypto adapter and bind its event queues
> to that port and enqueue events using the API rte_event_enqueue_burst().
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>

Acked-by: Jerin Jacob <jerinj@marvell.com>

Series applied to dpdk-next-net-eventdev/for-main.

Thanks

> ---
>  devtools/libabigail.abignore                  |  7 +-
>  .../prog_guide/event_crypto_adapter.rst       | 69 ++++++++++++-------
>  doc/guides/rel_notes/release_21_05.rst        |  6 ++
>  lib/librte_eventdev/eventdev_trace_points.c   |  3 +
>  .../rte_event_crypto_adapter.h                | 63 +++++++++++++++++
>  lib/librte_eventdev/rte_eventdev.c            | 10 +++
>  lib/librte_eventdev/rte_eventdev.h            |  9 ++-
>  lib/librte_eventdev/rte_eventdev_trace_fp.h   | 10 +++
>  lib/librte_eventdev/version.map               |  1 +
>  9 files changed, 150 insertions(+), 28 deletions(-)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 6c0b38984..46a5a6af5 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -19,4 +19,9 @@
>  ; Ignore fields inserted in cacheline boundary of rte_cryptodev
>  [suppress_type]
>         name = rte_cryptodev
> -       has_data_member_inserted_between = {offset_after(attached), end}
> \ No newline at end of file
> +       has_data_member_inserted_between = {offset_after(attached), end}
> +
> +; Ignore fields inserted in place of reserved fields of rte_eventdev
> +[suppress_type]
> +       name = rte_eventdev
> +       has_data_member_inserted_between = {offset_after(attached), end}
> diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst
> index 1e3eb7139..4fb5c688e 100644
> --- a/doc/guides/prog_guide/event_crypto_adapter.rst
> +++ b/doc/guides/prog_guide/event_crypto_adapter.rst
> @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed.
>  RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports
> -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application
> -can directly submit the crypto operations to the cryptodev.
> -If not, application retrieves crypto adapter's event port using
> -rte_event_crypto_adapter_event_port_get() API. Then, links its event
> -queue to this port and starts enqueuing crypto operations as events
> -to the eventdev. The adapter then dequeues the events and submits the
> -crypto operations to the cryptodev. After the crypto completions, the
> -adapter enqueues events to the event device.
> -Application can use this mode, when ingress packet ordering is needed.
> -In this mode, events dequeued from the adapter will be treated as
> -forwarded events. The application needs to specify the cryptodev ID
> -and queue pair ID (request information) needed to enqueue a crypto
> -operation in addition to the event information (response information)
> -needed to enqueue an event after the crypto operation has completed.
> +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto
> +PMD supports internal event port
> +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should
> +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as
> +events to crypto adapter. If not, application retrieves crypto adapter's event
> +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event
> +queue to this port and starts enqueuing crypto operations as events to eventdev
> +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and
> +submits the crypto operations to the cryptodev. After the crypto operation is
> +complete, the adapter enqueues events to the event device. The application can
> +use this mode when ingress packet ordering is needed. In this mode, events
> +dequeued from the adapter will be treated as forwarded events. The application
> +needs to specify the cryptodev ID and queue pair ID (request information) needed
> +to enqueue a crypto operation in addition to the event information (response
> +information) needed to enqueue an event after the crypto operation has
> +completed.
>
>  .. _figure_event_crypto_adapter_op_forward:
>
> @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is
>  expected to fill the ``struct rte_event_crypto_adapter_conf`` structure
>  passed to it.
>
> -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter
> -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API.
> -Application can use this event port to link with event queue on which it
> -enqueues events towards the crypto adapter.
> +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto
> +PMD supports internal event port
> +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto
> +operations should be enqueued to the crypto adapter using
> +``rte_event_crypto_adapter_enqueue()`` API. If not, the event port created by
> +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()``
> +API. An application can use this event port to link with an event queue, on
> +which it enqueues events towards the crypto adapter using
> +``rte_event_enqueue_burst()``.
>
>  .. code-block:: c
>
> -        uint8_t id, evdev, crypto_ev_port_id, app_qid;
> +        uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid;
>          struct rte_event ev;
> +        uint32_t cap;
>          int ret;
>
> -        ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id);
> -        ret = rte_event_queue_setup(evdev, app_qid, NULL);
> -        ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1);
> -
>          // Fill in event info and update event_ptr with rte_crypto_op
>          memset(&ev, 0, sizeof(ev));
> -        ev.queue_id = app_qid;
>          .
>          .
>          ev.event_ptr = op;
> -        ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events);
> +
> +        ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap);
> +        if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) {
> +                ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id,
> +                                                       ev, nb_events);
> +        } else {
> +                ret = rte_event_crypto_adapter_event_port_get(id,
> +                                                        &crypto_ev_port_id);
> +                ret = rte_event_queue_setup(evdev_id, app_qid, NULL);
> +                ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid,
> +                                          NULL, 1);
> +                ev.queue_id = app_qid;
> +                ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev,
> +                                              nb_events);
> +        }
> +
>
>  Querying adapter capabilities
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index b21906ccf..773dcbd58 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -182,6 +182,12 @@ New Features
>    * Added command to display Rx queue used descriptor count.
>      ``show port (port_id) rxq (queue_id) desc used count``
>
> +* **Enhanced crypto adapter forward mode.**
> +
> +  * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto
> +    adapter if forward mode is supported by driver.
> +  * Added support for crypto adapter forward mode in octeontx2 event and crypto
> +    device driver.
>
>  Removed Items
>  -------------
> diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c
> index 1a0ccc448..3867ec800 100644
> --- a/lib/librte_eventdev/eventdev_trace_points.c
> +++ b/lib/librte_eventdev/eventdev_trace_points.c
> @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start,
>
>  RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop,
>         lib.eventdev.crypto.stop)
> +
> +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue,
> +       lib.eventdev.crypto.enq)
> diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h
> index 60630ef66..f8c6cca87 100644
> --- a/lib/librte_eventdev/rte_event_crypto_adapter.h
> +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h
> @@ -171,6 +171,7 @@ extern "C" {
>  #include <stdint.h>
>
>  #include "rte_eventdev.h"
> +#include "eventdev_pmd.h"
>
>  /**
>   * Crypto event adapter mode
> @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id);
>  int
>  rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
>
> +/**
> + * Enqueue a burst of crypto operations as event objects supplied in *rte_event*
> + * structure on an event crypto adapter designated by its event *dev_id* through
> + * the event port specified by *port_id*. This function is supported if the
> + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD
> + * capability flag set.
> + *
> + * The *nb_events* parameter is the number of event objects to enqueue which are
> + * supplied in the *ev* array of *rte_event* structure.
> + *
> + * The rte_event_crypto_adapter_enqueue() function returns the number of
> + * event objects it actually enqueued. A return value equal to *nb_events*
> + * means that all event objects have been enqueued.
> + *
> + * @param dev_id
> + *  The identifier of the device.
> + * @param port_id
> + *  The identifier of the event port.
> + * @param ev
> + *  Points to an array of *nb_events* objects of type *rte_event* structure
> + *  which contain the event object enqueue operations to be processed.
> + * @param nb_events
> + *  The number of event objects to enqueue, typically number of
> + *  rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...)
> + *  available for this port.
> + *
> + * @return
> + *  The number of event objects actually enqueued on the event device. The
> + *  return value can be less than the value of the *nb_events* parameter when
> + *  the event devices queue is full or if invalid parameters are specified in a
> + *  *rte_event*. If the return value is less than *nb_events*, the remaining
> + *  events at the end of ev[] are not consumed and the caller has to take care
> + *  of them, and rte_errno is set accordingly. Possible errno values include:
> + *  - EINVAL   The port ID is invalid, device ID is invalid, an event's queue
> + *             ID is invalid, or an event's sched type doesn't match the
> + *             capabilities of the destination queue.
> + *  - ENOSPC   The event port was backpressured and unable to enqueue
> + *             one or more events. This error code is only applicable to
> + *             closed systems.
> + */
> +static inline uint16_t
> +rte_event_crypto_adapter_enqueue(uint8_t dev_id,
> +                               uint8_t port_id,
> +                               struct rte_event ev[],
> +                               uint16_t nb_events)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +       RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +
> +       if (port_id >= dev->data->nb_ports) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +#endif
> +       rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev,
> +               nb_events);
> +
> +       return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events);
> +}
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index c9bb5d227..594dd5e75 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port,
>         return 0;
>  }
>
> +static uint16_t
> +rte_event_crypto_adapter_enqueue(__rte_unused void *port,
> +                       __rte_unused struct rte_event ev[],
> +                       __rte_unused uint16_t nb_events)
> +{
> +       rte_errno = ENOTSUP;
> +       return 0;
> +}
> +
>  struct rte_eventdev *
>  rte_event_pmd_allocate(const char *name, int socket_id)
>  {
> @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)
>
>         eventdev->txa_enqueue = rte_event_tx_adapter_enqueue;
>         eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue;
> +       eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue;
>
>         if (eventdev->data == NULL) {
>                 struct rte_eventdev_data *eventdev_data = NULL;
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index 5f1f544cc..a9c496fb6 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port,
>   * burst having same destination Ethernet port & Tx queue.
>   */
>
> +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
> +               struct rte_event ev[], uint16_t nb_events);
> +/**< @internal Enqueue burst of events on crypto adapter */
> +
>  #define RTE_EVENTDEV_NAME_MAX_LEN      (64)
>  /**< @internal Max length of name of event PMD */
>
> @@ -1434,8 +1438,11 @@ struct rte_eventdev {
>         uint8_t attached : 1;
>         /**< Flag indicating the device is attached */
>
> +       event_crypto_adapter_enqueue ca_enqueue;
> +       /**< Pointer to PMD crypto adapter enqueue function. */
> +
>         uint64_t reserved_64s[4]; /**< Reserved for future fields */
> -       void *reserved_ptrs[4];   /**< Reserved for future fields */
> +       void *reserved_ptrs[3];   /**< Reserved for future fields */
>  } __rte_cache_aligned;
>
>  extern struct rte_eventdev *rte_eventdevs;
> diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h
> index 349129c0f..5639e0b83 100644
> --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h
> +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h
> @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP(
>         rte_trace_point_emit_u8(flags);
>  )
>
> +RTE_TRACE_POINT_FP(
> +       rte_eventdev_trace_crypto_adapter_enqueue,
> +       RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
> +               uint16_t nb_events),
> +       rte_trace_point_emit_u8(dev_id);
> +       rte_trace_point_emit_u8(port_id);
> +       rte_trace_point_emit_ptr(ev_table);
> +       rte_trace_point_emit_u16(nb_events);
> +)
> +
>  RTE_TRACE_POINT_FP(
>         rte_eventdev_trace_timer_arm_burst,
>         RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table,
> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
> index 902df0ae3..7e264d3b8 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -143,6 +143,7 @@ EXPERIMENTAL {
>         rte_event_vector_pool_create;
>         rte_event_eth_rx_adapter_vector_limits_get;
>         rte_event_eth_rx_adapter_queue_event_vector_config;
> +       __rte_eventdev_trace_crypto_adapter_enqueue;
>  };
>
>  INTERNAL {
> --
> 2.25.1
>