From: Akhil Goyal <gakhil@marvell.com>
To: dev@dpdk.org
Date: Mon, 11 Oct 2021 18:13:05 +0530
Message-ID: <20211011124309.4066491-2-gakhil@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211011124309.4066491-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
 <20211011124309.4066491-1-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH v2 1/5] cryptodev: separate out internal structures

A new header file, rte_cryptodev_core.h, is added, and all internal data
structures that need not be exposed directly to applications are moved
into it. These structures are mostly used by drivers, but they have to
stay in a public header because they are accessed by the datapath inline
functions for performance reasons.
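The split is meant to be transparent to applications: they keep including
rte_cryptodev.h and calling the inline burst functions, and
rte_cryptodev_core.h is only pulled in by that header so the inline
functions can reach the internal structures. A minimal sketch of the
(unchanged) application-side fast path under that assumption; the helper
name and the dev_id/qp_id setup are hypothetical and not part of this
patch:

#include <stdint.h>

#include <rte_crypto.h>
#include <rte_cryptodev.h>	/* still the only cryptodev header an app includes */

/*
 * Hypothetical helper: submit a burst of prepared ops on one queue pair
 * and poll once for completions. dev_id/qp_id are assumed to have been
 * set up earlier with rte_cryptodev_configure() and
 * rte_cryptodev_queue_pair_setup().
 */
static uint16_t
submit_and_poll(uint8_t dev_id, uint16_t qp_id,
		struct rte_crypto_op **ops, uint16_t nb_ops)
{
	uint16_t nb_enq, nb_deq;

	/* Inline fast-path calls; after this patch they dereference
	 * struct rte_cryptodev from rte_cryptodev_core.h, but their
	 * signatures and behaviour are unchanged.
	 */
	nb_enq = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, nb_ops);
	nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, nb_enq);

	return nb_deq;
}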
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 lib/cryptodev/cryptodev_pmd.h      |   6 -
 lib/cryptodev/meson.build          |   4 +-
 lib/cryptodev/rte_cryptodev.h      | 360 ++++++++++++-----------------
 lib/cryptodev/rte_cryptodev_core.h | 100 ++++++++
 4 files changed, 245 insertions(+), 225 deletions(-)
 create mode 100644 lib/cryptodev/rte_cryptodev_core.h

diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 8cc9051e09..36606dd10b 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -92,12 +92,6 @@ __rte_internal
 struct rte_cryptodev *
 rte_cryptodev_pmd_get_named_dev(const char *name);
 
-/**
- * The pool of rte_cryptodev structures.
- */
-extern struct rte_cryptodev *rte_cryptodevs;
-
-
 /**
  * Definitions of all functions exported by a driver through the
  * the generic structure of type *crypto_dev_ops* supplied in the
diff --git a/lib/cryptodev/meson.build b/lib/cryptodev/meson.build
index 51371c3aa2..289b66ab76 100644
--- a/lib/cryptodev/meson.build
+++ b/lib/cryptodev/meson.build
@@ -14,7 +14,9 @@ headers = files(
         'rte_crypto_sym.h',
         'rte_crypto_asym.h',
 )
-
+indirect_headers += files(
+        'rte_cryptodev_core.h',
+)
 driver_sdk_headers += files(
         'cryptodev_pmd.h',
 )
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index cdd7168fba..ce0dca72be 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -867,17 +867,6 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
 		enum rte_cryptodev_event_type event,
 		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
 
-typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
-		struct rte_crypto_op **ops, uint16_t nb_ops);
-/**< Dequeue processed packets from queue pair of a device. */
-
-typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
-		struct rte_crypto_op **ops, uint16_t nb_ops);
-/**< Enqueue packets for processing on queue pair of a device. */
-
-
-
-
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
@@ -907,216 +896,9 @@ struct rte_cryptodev_cb_rcu {
 	/**< RCU QSBR variable per queue pair */
 };
 
-/** The data structure associated with each crypto device. */
-struct rte_cryptodev {
-	dequeue_pkt_burst_t dequeue_burst;
-	/**< Pointer to PMD receive function. */
-	enqueue_pkt_burst_t enqueue_burst;
-	/**< Pointer to PMD transmit function. */
-
-	struct rte_cryptodev_data *data;
-	/**< Pointer to device data */
-	struct rte_cryptodev_ops *dev_ops;
-	/**< Functions exported by PMD */
-	uint64_t feature_flags;
-	/**< Feature flags exposes HW/SW features for the given device */
-	struct rte_device *device;
-	/**< Backing device */
-
-	uint8_t driver_id;
-	/**< Crypto driver identifier*/
-
-	struct rte_cryptodev_cb_list link_intr_cbs;
-	/**< User application callback for interrupts if present */
-
-	void *security_ctx;
-	/**< Context for security ops */
-
-	__extension__
-	uint8_t attached : 1;
-	/**< Flag indicating the device is attached */
-
-	struct rte_cryptodev_cb_rcu *enq_cbs;
-	/**< User application callback for pre enqueue processing */
-
-	struct rte_cryptodev_cb_rcu *deq_cbs;
-	/**< User application callback for post dequeue processing */
-} __rte_cache_aligned;
-
 void *
 rte_cryptodev_get_sec_ctx(uint8_t dev_id);
 
-/**
- *
- * The data part, with no function pointers, associated with each device.
- *
- * This structure is safe to place in shared memory to be common among
- * different processes in a multi-process configuration.
- */
-struct rte_cryptodev_data {
-	uint8_t dev_id;
-	/**< Device ID for this instance */
-	uint8_t socket_id;
-	/**< Socket ID where memory is allocated */
-	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
-	/**< Unique identifier name */
-
-	__extension__
-	uint8_t dev_started : 1;
-	/**< Device state: STARTED(1)/STOPPED(0) */
-
-	struct rte_mempool *session_pool;
-	/**< Session memory pool */
-	void **queue_pairs;
-	/**< Array of pointers to queue pairs. */
-	uint16_t nb_queue_pairs;
-	/**< Number of device queue pairs. */
-
-	void *dev_private;
-	/**< PMD-specific private data */
-} __rte_cache_aligned;
-
-extern struct rte_cryptodev *rte_cryptodevs;
-/**
- *
- * Dequeue a burst of processed crypto operations from a queue on the crypto
- * device. The dequeued operation are stored in *rte_crypto_op* structures
- * whose pointers are supplied in the *ops* array.
- *
- * The rte_cryptodev_dequeue_burst() function returns the number of ops
- * actually dequeued, which is the number of *rte_crypto_op* data structures
- * effectively supplied into the *ops* array.
- *
- * A return value equal to *nb_ops* indicates that the queue contained
- * at least *nb_ops* operations, and this is likely to signify that other
- * processed operations remain in the devices output queue. Applications
- * implementing a "retrieve as many processed operations as possible" policy
- * can check this specific case and keep invoking the
- * rte_cryptodev_dequeue_burst() function until a value less than
- * *nb_ops* is returned.
- *
- * The rte_cryptodev_dequeue_burst() function does not provide any error
- * notification to avoid the corresponding overhead.
- *
- * @param	dev_id		The symmetric crypto device identifier
- * @param	qp_id		The index of the queue pair from which to
- *				retrieve processed packets. The value must be
- *				in the range [0, nb_queue_pair - 1] previously
- *				supplied to rte_cryptodev_configure().
- * @param	ops		The address of an array of pointers to
- *				*rte_crypto_op* structures that must be
- *				large enough to store *nb_ops* pointers in it.
- * @param	nb_ops		The maximum number of operations to dequeue.
- *
- * @return
- *   - The number of operations actually dequeued, which is the number
- *   of pointers to *rte_crypto_op* structures effectively supplied to the
- *   *ops* array.
- */
-static inline uint16_t
-rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
-		struct rte_crypto_op **ops, uint16_t nb_ops)
-{
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
-
-	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	nb_ops = (*dev->dequeue_burst)
-			(dev->data->queue_pairs[qp_id], ops, nb_ops);
-#ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->deq_cbs != NULL)) {
-		struct rte_cryptodev_cb_rcu *list;
-		struct rte_cryptodev_cb *cb;
-
-		/* __ATOMIC_RELEASE memory order was used when the
-		 * call back was inserted into the list.
-		 * Since there is a clear dependency between loading
-		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-		 * not required.
-		 */
-		list = &dev->deq_cbs[qp_id];
-		rte_rcu_qsbr_thread_online(list->qsbr, 0);
-		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
-
-		while (cb != NULL) {
-			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
-					cb->arg);
-			cb = cb->next;
-		};
-
-		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
-	}
-#endif
-	return nb_ops;
-}
-
-/**
- * Enqueue a burst of operations for processing on a crypto device.
- *
- * The rte_cryptodev_enqueue_burst() function is invoked to place
- * crypto operations on the queue *qp_id* of the device designated by
- * its *dev_id*.
- *
- * The *nb_ops* parameter is the number of operations to process which are
- * supplied in the *ops* array of *rte_crypto_op* structures.
- *
- * The rte_cryptodev_enqueue_burst() function returns the number of
- * operations it actually enqueued for processing. A return value equal to
- * *nb_ops* means that all packets have been enqueued.
- *
- * @param	dev_id		The identifier of the device.
- * @param	qp_id		The index of the queue pair which packets are
- *				to be enqueued for processing. The value
- *				must be in the range [0, nb_queue_pairs - 1]
- *				previously supplied to
- *				 *rte_cryptodev_configure*.
- * @param	ops		The address of an array of *nb_ops* pointers
- *				to *rte_crypto_op* structures which contain
- *				the crypto operations to be processed.
- * @param	nb_ops		The number of operations to process.
- *
- * @return
- * The number of operations actually enqueued on the crypto device. The return
- * value can be less than the value of the *nb_ops* parameter when the
- * crypto devices queue is full or if invalid parameters are specified in
- * a *rte_crypto_op*.
- */
-static inline uint16_t
-rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
-		struct rte_crypto_op **ops, uint16_t nb_ops)
-{
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
-
-#ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->enq_cbs != NULL)) {
-		struct rte_cryptodev_cb_rcu *list;
-		struct rte_cryptodev_cb *cb;
-
-		/* __ATOMIC_RELEASE memory order was used when the
-		 * call back was inserted into the list.
-		 * Since there is a clear dependency between loading
-		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-		 * not required.
-		 */
-		list = &dev->enq_cbs[qp_id];
-		rte_rcu_qsbr_thread_online(list->qsbr, 0);
-		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
-
-		while (cb != NULL) {
-			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
-					cb->arg);
-			cb = cb->next;
-		};
-
-		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
-	}
-#endif
-
-	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	return (*dev->enqueue_burst)(
-			dev->data->queue_pairs[qp_id], ops, nb_ops);
-}
-
-
 /** Cryptodev symmetric crypto session
  * Each session is derived from a fixed xform chain. Therefore each session
  * has a fixed algo, key, op-type, digest_len etc.
@@ -2009,6 +1791,148 @@ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 				      uint16_t qp_id,
 				      struct rte_cryptodev_cb *cb);
 
+#include <rte_cryptodev_core.h>
+/**
+ *
+ * Dequeue a burst of processed crypto operations from a queue on the crypto
+ * device. The dequeued operation are stored in *rte_crypto_op* structures
+ * whose pointers are supplied in the *ops* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of ops
+ * actually dequeued, which is the number of *rte_crypto_op* data structures
+ * effectively supplied into the *ops* array.
+ *
+ * A return value equal to *nb_ops* indicates that the queue contained
+ * at least *nb_ops* operations, and this is likely to signify that other
+ * processed operations remain in the devices output queue. Applications
+ * implementing a "retrieve as many processed operations as possible" policy
+ * can check this specific case and keep invoking the
+ * rte_cryptodev_dequeue_burst() function until a value less than
+ * *nb_ops* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param	dev_id		The symmetric crypto device identifier
+ * @param	qp_id		The index of the queue pair from which to
+ *				retrieve processed packets. The value must be
+ *				in the range [0, nb_queue_pair - 1] previously
+ *				supplied to rte_cryptodev_configure().
+ * @param	ops		The address of an array of pointers to
+ *				*rte_crypto_op* structures that must be
+ *				large enough to store *nb_ops* pointers in it.
+ * @param	nb_ops		The maximum number of operations to dequeue.
+ *
+ * @return
+ *   - The number of operations actually dequeued, which is the number
+ *   of pointers to *rte_crypto_op* structures effectively supplied to the
+ *   *ops* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
+	nb_ops = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], ops, nb_ops);
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->deq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->deq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+	return nb_ops;
+}
+
+/**
+ * Enqueue a burst of operations for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place
+ * crypto operations on the queue *qp_id* of the device designated by
+ * its *dev_id*.
+ *
+ * The *nb_ops* parameter is the number of operations to process which are
+ * supplied in the *ops* array of *rte_crypto_op* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of
+ * operations it actually enqueued for processing. A return value equal to
+ * *nb_ops* means that all packets have been enqueued.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair which packets are
+ *				to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				 *rte_cryptodev_configure*.
+ * @param	ops		The address of an array of *nb_ops* pointers
+ *				to *rte_crypto_op* structures which contain
+ *				the crypto operations to be processed.
+ * @param	nb_ops		The number of operations to process.
+ *
+ * @return
+ * The number of operations actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_ops* parameter when the
+ * crypto devices queue is full or if invalid parameters are specified in
+ * a *rte_crypto_op*.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->enq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->enq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+
+	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], ops, nb_ops);
+}
+
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
new file mode 100644
index 0000000000..1633e55889
--- /dev/null
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef _RTE_CRYPTODEV_CORE_H_
+#define _RTE_CRYPTODEV_CORE_H_
+
+/**
+ * @file
+ *
+ * RTE Crypto Device internal header.
+ *
+ * This header contains internal data types. But they are still part of the
+ * public API because they are used by inline functions in the published API.
+ *
+ * Applications should not use these directly.
+ *
+ */
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	__extension__
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+
+/** @internal The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD transmit function. */
+
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	uint64_t feature_flags;
+	/**< Feature flags exposes HW/SW features for the given device */
+	struct rte_device *device;
+	/**< Backing device */
+
+	uint8_t driver_id;
+	/**< Crypto driver identifier*/
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	void *security_ctx;
+	/**< Context for security ops */
+
+	__extension__
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+
+	struct rte_cryptodev_cb_rcu *enq_cbs;
+	/**< User application callback for pre enqueue processing */
+
+	struct rte_cryptodev_cb_rcu *deq_cbs;
+	/**< User application callback for post dequeue processing */
+} __rte_cache_aligned;
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+#endif /* _RTE_CRYPTODEV_CORE_H_ */
-- 
2.25.1