* [dpdk-dev] [dpdk-dev v6 0/4] cryptodev: add data-path service APIs
@ 2020-08-18 16:28 Fan Zhang
2020-08-18 16:28 ` [dpdk-dev] [dpdk-dev v6 1/4] cryptodev: add crypto " Fan Zhang
` (4 more replies)
0 siblings, 5 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-18 16:28 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, ArkadiuszX.Kusztal, AdamX.Dybkowski, Fan Zhang
Direct crypto data-path service is a set of APIs provided especially for
external libraries/applications that want to take advantage of the rich
features provided by cryptodev, but do not necessarily depend on cryptodev
operations, mempools, or mbufs in their data-path implementations.
The direct crypto data-path service has the following advantages:
- Supports raw data pointers and physical addresses as input.
- Does not require specific data structures allocated from the heap, such as
the cryptodev operation.
- Enqueue in a burst or one operation at a time. The service allows enqueuing in
a burst similar to the ``rte_cryptodev_enqueue_burst`` operation, or
enqueuing one job at a time while maintaining the necessary context data
locally for the next single-job enqueue operation. The latter method is
especially helpful when the user application's crypto operations are clustered
into a burst. Allowing enqueue of one operation at a time removes one
additional loop and also reduces cache misses in the double "looping" situation.
- Customizable dequeue count. Instead of dequeuing the maximum possible number
of operations, as the ``rte_cryptodev_dequeue_burst`` operation does, the
service allows the user to provide a callback function to decide how many
operations are to be dequeued. This is especially helpful when the expected
dequeue count is hidden inside the opaque data stored during enqueue. The user
can provide a callback function to parse the opaque data structure.
- Abandon enqueue and dequeue at any time. One drawback of the
``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
operations is that once an operation is enqueued/dequeued there is no way to
undo it. The service makes abandoning an operation possible by
creating a local copy of the queue operation data in the service context
data. The data is written back to the driver-maintained operation data
only when the enqueue or dequeue done function is called.
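The dequeue-count callback mentioned above can, for instance, recover the burst size from the opaque data stored at enqueue time. A minimal sketch, where the ``pkt_ctx`` structure and its fields are hypothetical illustrations rather than part of the API:

```c
#include <stdint.h>

/* Hypothetical per-burst context the application stores as the opaque
 * pointer at enqueue time; only the first job of a burst needs it. */
struct pkt_ctx {
	uint32_t burst_size; /* number of jobs enqueued together */
	void *user_data;     /* application-specific payload */
};

/* Matches the rte_cryptodev_get_dequeue_count_t signature: the driver
 * hands back the opaque data of the first processed job, and the
 * callback returns how many operations should be dequeued. */
static uint32_t
get_dequeue_count(void *opaque)
{
	const struct pkt_ctx *pctx = opaque;

	return pctx->burst_size;
}
```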
Cryptodev PMDs that support this feature present the
``RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE`` feature flag. To use this
feature the function ``rte_cryptodev_get_dp_service_ctx_data_size`` should
be called to get the data-path service context data size. The user should
create a local buffer at least this size long and initialize it using the
``rte_cryptodev_dp_configure_service`` function call.
The ``rte_cryptodev_dp_configure_service`` function call initializes or
updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
driver-specific queue pair data pointer, the service context buffer, and a
set of function pointers to enqueue and dequeue different algorithms'
operations. ``rte_cryptodev_dp_configure_service`` should be called:
- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
- When a different cryptodev session, security session, or session-less xform
is used (set the ``is_update`` parameter to 1).
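Taken together, the setup steps above might look as follows. This is a sketch only: ``dev_id``, ``qp_id`` and ``sess`` are assumed to be prepared by the application beforehand, and the AEAD service type is chosen arbitrarily for illustration:

```c
/* Query the required context size; a negative return means the device
 * does not support the data-path service. */
int ctx_size = rte_cryptodev_get_dp_service_ctx_data_size(dev_id);
if (ctx_size < 0)
	return -1;

struct rte_crypto_dp_service_ctx *ctx =
	rte_zmalloc(NULL, ctx_size, RTE_CACHE_LINE_SIZE);
if (ctx == NULL)
	return -1;

union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sess };

/* First-time initialization: is_update is set to 0. */
if (rte_cryptodev_dp_configure_service(dev_id, qp_id,
		RTE_CRYPTO_DP_SYM_AEAD, RTE_CRYPTO_OP_WITH_SESSION,
		sess_ctx, ctx, 0) < 0)
	return -1;
```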
Two different enqueue functions are provided.
- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
the ``rte_crypto_sym_vec`` structure.
- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
Neither enqueue function commands the crypto device to start processing
until the ``rte_cryptodev_dp_submit_done`` function is called. Before then the
user shall expect the driver only to store the necessary context data in the
``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
user wants to abandon the submitted operations, simply call the
``rte_cryptodev_dp_configure_service`` function instead with the parameter
``is_update`` set to 0. The driver will restore the service context data to
the previous state.
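A sketch of the single-job enqueue path, assuming ``ctx`` was initialized by ``rte_cryptodev_dp_configure_service`` and that the ``data``, ``iv``, ``digest``, ``aad`` and ``opaque`` arrays are prepared by the application:

```c
uint32_t i, n_ok = 0;

/* Each submit only stages context data locally; the device is not
 * touched until submit_done below. */
for (i = 0; i < burst_size; i++) {
	if (rte_cryptodev_dp_submit_single_job(ctx, &data[i], 1, ofs,
			&iv[i], &digest[i], &aad[i], opaque[i]) < 0)
		break;
	n_ok++;
}

if (n_ok == burst_size) {
	/* Kick the device to start processing the staged jobs. */
	rte_cryptodev_dp_submit_done(ctx, n_ok);
} else {
	/* Abandon: re-configuring with is_update set to 0 restores the
	 * service context from the driver's queue data. */
	rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
			sess_type, sess_ctx, ctx, 0);
}
```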
To dequeue the operations the user also has two choices:
- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
user needs to provide the callback functions for the driver to get the
dequeue count and perform post-processing such as writing the status field.
- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
As with enqueue, the function ``rte_cryptodev_dp_dequeue_done`` is used to
merge the user's local service context data with the driver's queue operation
data. To abandon the dequeue operation (keeping the operations in the
queue), the user shall skip the ``rte_cryptodev_dp_dequeue_done`` function call
and instead call the ``rte_cryptodev_dp_configure_service`` function with the
parameter ``is_update`` set to 0.
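A corresponding dequeue sketch, where ``get_dequeue_count`` and ``post_dequeue`` are callbacks written by the application against the typedefs introduced in this patch, and ``out_opaque`` is assumed to be sized for a full burst:

```c
uint32_t n_success = 0;

/* is_opaque_array set to 1: every dequeued job's opaque pointer is
 * written into out_opaque. */
uint32_t n = rte_cryptodev_dp_sym_dequeue(ctx, get_dequeue_count,
		post_dequeue, out_opaque, 1, &n_success);

if (n > 0) {
	/* Commit: merge the local context back into the driver's queue
	 * data. Skipping this and calling
	 * rte_cryptodev_dp_configure_service with is_update set to 0
	 * would abandon the dequeue instead. */
	rte_cryptodev_dp_dequeue_done(ctx, n);
}
```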
There are a few limitations to the data path service:
* Only in-place operations are supported.
* The APIs are NOT thread-safe.
* The direct APIs' enqueue CANNOT be mixed with rte_cryptodev_enqueue_burst,
or vice versa.
v6:
- Rebased on top of DPDK 20.08.
- Changed to service ctx and added single job submit/dequeue.
v5:
- Changed to use rte_crypto_sym_vec as input.
- Changed to use public APIs instead of use function pointer.
v4:
- Added missed patch.
v3:
- Instead of QAT only API, moved the API to cryptodev.
- Added cryptodev feature flags.
v2:
- Used a structure to simplify parameters.
- Added unit tests.
- Added documentation.
Fan Zhang (4):
cryptodev: add crypto data-path service APIs
crypto/qat: add crypto data-path service API support
test/crypto: add unit-test for cryptodev direct APIs
doc: add cryptodev service APIs guide
app/test/test_cryptodev.c | 354 ++++++-
app/test/test_cryptodev.h | 6 +
app/test/test_cryptodev_blockcipher.c | 50 +-
doc/guides/prog_guide/cryptodev_lib.rst | 90 ++
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 923 ++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 44 +-
lib/librte_cryptodev/rte_cryptodev.c | 45 +
lib/librte_cryptodev/rte_cryptodev.h | 335 ++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 +-
.../rte_cryptodev_version.map | 10 +
15 files changed, 1889 insertions(+), 48 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v6 1/4] cryptodev: add crypto data-path service APIs
2020-08-18 16:28 [dpdk-dev] [dpdk-dev v6 0/4] cryptodev: add data-path service APIs Fan Zhang
@ 2020-08-18 16:28 ` Fan Zhang
2020-08-18 16:28 ` [dpdk-dev] [dpdk-dev v6 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
` (3 subsequent siblings)
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-18 16:28 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, ArkadiuszX.Kusztal, AdamX.Dybkowski,
Fan Zhang, Piotr Bronowski
This patch adds data-path service APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors and operation modes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
---
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 44 ++-
lib/librte_cryptodev/rte_cryptodev.c | 45 +++
lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++-
.../rte_cryptodev_version.map | 10 +
6 files changed, 481 insertions(+), 9 deletions(-)
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index fd5ef3a87..f009be9af 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op,
return 0;
}
+/** Crypto data-path service types */
+enum rte_crypto_dp_service {
+ RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
+ RTE_CRYPTO_DP_SYM_AUTH_ONLY,
+ RTE_CRYPTO_DP_SYM_CHAIN,
+ RTE_CRYPTO_DP_SYM_AEAD,
+ RTE_CRYPTO_DP_N_SERVICE
+};
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..518e4111b 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -50,6 +50,18 @@ struct rte_crypto_sgl {
uint32_t num;
};
+/**
+ * Crypto IO Data without length info.
+ * Supposed to be used to pass input/output data buffers with lengths
+ * defined when creating crypto session.
+ */
+struct rte_crypto_data {
+ /** virtual address of the data buffer */
+ void *base;
+ /** IOVA of the data buffer */
+ rte_iova_t iova;
+};
+
/**
* Synchronous operation descriptor.
* Supposed to be used with CPU crypto API call.
@@ -57,12 +69,32 @@ struct rte_crypto_sgl {
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
- /** array of pointers to IV */
- void **iv;
- /** array of pointers to AAD */
- void **aad;
- /** array of pointers to digest */
- void **digest;
+
+ union {
+
+ /* Supposed to be used with CPU crypto API call. */
+ struct {
+ /** array of pointers to IV */
+ void **iv;
+ /** array of pointers to AAD */
+ void **aad;
+ /** array of pointers to digest */
+ void **digest;
+ };
+
+ /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec()
+ * call.
+ */
+ struct {
+ /** vector to IV */
+ struct rte_crypto_data *iv_vec;
+ /** vector to AAD */
+ struct rte_crypto_data *aad_vec;
+ /** vector to Digest */
+ struct rte_crypto_data *digest_vec;
+ };
+ };
+
/**
* array of statuses for each operation:
* - 0 on success
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..8a28511f9 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1914,6 +1914,51 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
}
+int
+rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id)
+{
+ struct rte_cryptodev *dev;
+ int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
+ int32_t priv_size;
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (*dev->dev_ops->get_drv_ctx_size == NULL ||
+ !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)) {
+ return -1;
+ }
+
+ priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
+ if (priv_size < 0)
+ return -1;
+
+ return RTE_ALIGN_CEIL((size + priv_size), 8);
+}
+
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
+{
+ struct rte_cryptodev *dev;
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)
+ || dev->dev_ops->configure_service == NULL)
+ return -1;
+
+ return (*dev->dev_ops->configure_service)(dev, qp_id, ctx,
+ service_type, sess_type, session_ctx, is_update);
+}
+
/** Initialise rte_crypto_op mempool element */
static void
rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..fdcfcbe14 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
/**< Support symmetric session-less operations */
#define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
/**< Support operations on data which is not byte aligned */
-
+#define RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE (1ULL << 24)
+/**< Support data-path service APIs with raw data as input */
/**
* Get the name of a crypto device feature flag
@@ -1351,6 +1352,338 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
struct rte_crypto_sym_vec *vec);
+/**
+ * Get the size of the data-path service context for all registered drivers.
+ *
+ * @param dev_id The device identifier.
+ *
+ * @return
+ * - If the device supports data-path service, return the context size.
+ * - If the device does not support the data-path service, return -1.
+ */
+__rte_experimental
+int
+rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id);
+
+/**
+ * Union of different crypto session types, including session-less xform
+ * pointer.
+ */
+union rte_cryptodev_session_ctx {
+ struct rte_cryptodev_sym_session *crypto_sess;
+ struct rte_crypto_sym_xform *xform;
+ struct rte_security_session *sec_sess;
+};
+
+/**
+ * Submit a data vector into the device queue, but the driver will not start
+ * processing until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void **opaque);
+
+/**
+ * Submit a single job into the device queue, but the driver will not start
+ * processing until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV data.
+ * @param digest Digest data.
+ * @param aad AAD data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_submit_single_job_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_data *iv, struct rte_crypto_data *digest,
+ struct rte_crypto_data *aad, void *opaque);
+
+/**
+ * Inform the queue pair to start processing or finish dequeuing all
+ * submitted/dequeued jobs.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param n The total number of submitted jobs.
+ */
+typedef void (*cryptodev_dp_sym_opeartion_done_t)(void *qp,
+ uint8_t *service_data, uint32_t n);
+
+/**
+ * Typedef of the callback function the user provides for the driver to get
+ * the dequeue count. The function may return a fixed number or the number
+ * parsed from the opaque data stored in the first processed job.
+ *
+ * @param opaque Dequeued opaque data.
+ **/
+typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque);
+
+/**
+ * Typedef of the callback function the user provides to handle post-dequeue
+ * operations, such as filling the status field.
+ *
+ * @param opaque Dequeued opaque data. In case
+ * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is
+ * set, this value will be the opaque data stored
+ * in the specific processed jobs referenced by
+ * index, otherwise it will be the opaque data
+ * stored in the first processed job in the burst.
+ * @param index Index number of the processed job.
+ * @param is_op_success Driver filled operation status.
+ **/
+typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index,
+ uint8_t is_op_success);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from the
+ * device queue. If *is_opaque_array* is set
+ * there should be enough room to store all
+ * opaque data.
+ * @param is_opaque_array Set to 1 if every dequeued job's opaque data
+ * is to be written into the *out_opaque* array.
+ * @param n_success_jobs Driver written value to specify the total
+ * count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param out_opaque Opaque pointer to be retrieved from the
+ * device queue.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
+ void *qp, uint8_t *service_data, void **out_opaque);
+
+/**
+ * Context data for asynchronous crypto process.
+ */
+struct rte_crypto_dp_service_ctx {
+ void *qp_data;
+
+ union {
+ /* Supposed to be used for symmetric crypto service */
+ struct {
+ cryptodev_dp_submit_single_job_t submit_single_job;
+ cryptodev_dp_sym_submit_vec_t submit_vec;
+ cryptodev_dp_sym_opeartion_done_t submit_done;
+ cryptodev_dp_sym_dequeue_t dequeue_opaque;
+ cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
+ cryptodev_dp_sym_opeartion_done_t dequeue_done;
+ };
+ };
+
+ /* Driver specific service data */
+ uint8_t drv_service_data[];
+};
+
+/**
+ * Configure one DP service context data. Calling this function for the first
+ * time the user should unset the *is_update* parameter and the driver will
+ * fill necessary operation data into ctx buffer. Only when
+ * rte_cryptodev_dp_submit_done() is called the data stored in the ctx buffer
+ * will not be effective.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param service_type Type of the service requested.
+ * @param sess_type session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update Set to 1 if ctx is pre-initialized but needs
+ * updating to a different service type or
+ * session, while the rest of the driver data
+ * remains the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
+
+/**
+ * Submit a single job into the device queue, but the driver will not start
+ * processing until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV data.
+ * @param digest Digest data.
+ * @param aad AAD data.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_data *iv, struct rte_crypto_data *digest,
+ struct rte_crypto_data *aad, void *opaque)
+{
+ return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
+ data, n_data_vecs, ofs, iv, digest, aad, opaque);
+}
+
+/**
+ * Submit a data vector into the device queue, but the driver will not start
+ * processing until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
+ ofs, opaque);
+}
+
+/**
+ * Command the queue pair to start processing all submitted jobs.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of submitted jobs.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_cryptodev_dp_submit_done(struct rte_crypto_dp_service_ctx *ctx, uint32_t n)
+{
+ (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from the
+ * device queue. If *is_opaque_array* is set
+ * there should be enough room to store all
+ * opaque data.
+ * @param is_opaque_array Set to 1 if every dequeued job's opaque data
+ * is to be written into the *out_opaque* array.
+ * @param n_success_jobs Driver written value to specify the total
+ * count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data,
+ get_dequeue_count, post_dequeue, out_opaque, is_opaque_array,
+ n_success_jobs);
+}
+
+/**
+ * Dequeue a single symmetric crypto job of user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param out_opaque Opaque pointer to be retrieved from the
+ * device queue. The driver shall support
+ * NULL input for this parameter.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
+ out_opaque);
+}
+
+/**
+ * Inform the queue pair that the dequeue of jobs has finished.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of jobs already dequeued.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_cryptodev_dp_dequeue_done(struct rte_crypto_dp_service_ctx *ctx, uint32_t n)
+{
+ (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 81975d72b..bf0260c87 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -316,6 +316,41 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+/**
+ * Typedef of the function the driver provides to get the service context
+ * private data size.
+ *
+ * @param dev Crypto device pointer.
+ *
+ * @return
+ * - On success return the size of the device's service context private data.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_get_service_ctx_size_t)(
+ struct rte_cryptodev *dev);
+
+/**
+ * Typedef of the function the driver provides to configure the data-path
+ * service.
+ *
+ * @param dev Crypto device pointer.
+ * @param qp_id Crypto device queue pair index.
+ * @param ctx The data-path service context data.
+ * @param service_type Type of the service requested.
+ * @param sess_type session type.
+ * @param session_ctx Session context data.
+ * @param is_update Set to 1 if ctx is pre-initialized but needs
+ * updating to a different service type or
+ * session, while the rest of the driver data
+ * remains the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_configure_service_t)(
+ struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx, uint8_t is_update);
/** Crypto device operations function pointer table */
struct rte_cryptodev_ops {
@@ -348,8 +383,16 @@ struct rte_cryptodev_ops {
/**< Clear a Crypto sessions private data. */
cryptodev_asym_free_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
- /**< process input data synchronously (cpu-crypto). */
+ union {
+ cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+ /**< process input data synchronously (cpu-crypto). */
+ struct {
+ cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
+ /**< Get data path service context data size. */
+ cryptodev_dp_configure_service_t configure_service;
+ /**< Initialize crypto service ctx data. */
+ };
+ };
};
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..d384382d3 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -105,4 +105,14 @@ EXPERIMENTAL {
# added in 20.08
rte_cryptodev_get_qp_status;
+
+ # added in 20.11
+ rte_cryptodev_dp_configure_service;
+ rte_cryptodev_get_dp_service_ctx_data_size;
+ rte_cryptodev_dp_submit_single_job;
+ rte_cryptodev_dp_sym_submit_vec;
+ rte_cryptodev_dp_submit_done;
+ rte_cryptodev_dp_sym_dequeue;
+ rte_cryptodev_dp_sym_dequeue_single_job;
+ rte_cryptodev_dp_dequeue_done;
};
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v6 2/4] crypto/qat: add crypto data-path service API support
2020-08-18 16:28 [dpdk-dev] [dpdk-dev v6 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-08-18 16:28 ` [dpdk-dev] [dpdk-dev v6 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-08-18 16:28 ` Fan Zhang
2020-08-18 16:28 ` [dpdk-dev] [dpdk-dev v6 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
` (2 subsequent siblings)
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-18 16:28 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, ArkadiuszX.Kusztal, AdamX.Dybkowski, Fan Zhang
This patch updates the QAT PMD to add crypto data-path service API support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 923 +++++++++++++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
5 files changed, 945 insertions(+), 2 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile
index 85d420709..1b71bbbab 100644
--- a/drivers/common/qat/Makefile
+++ b/drivers/common/qat/Makefile
@@ -42,6 +42,7 @@ endif
SRCS-y += qat_sym.c
SRCS-y += qat_sym_session.c
SRCS-y += qat_sym_pmd.c
+ SRCS-y += qat_sym_hw_dp.c
build_qat = yes
endif
endif
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
index a225f374a..bc90ec44c 100644
--- a/drivers/crypto/qat/meson.build
+++ b/drivers/crypto/qat/meson.build
@@ -15,6 +15,7 @@ if dep.found()
qat_sources += files('qat_sym_pmd.c',
'qat_sym.c',
'qat_sym_session.c',
+ 'qat_sym_hw_dp.c',
'qat_asym_pmd.c',
'qat_asym.c')
qat_ext_deps += dep
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 1a9748849..2d6316130 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -264,6 +264,18 @@ qat_sym_process_response(void **op, uint8_t *resp)
}
*op = (void *)rx_op;
}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ uint8_t is_update);
+
+int
+qat_sym_get_service_ctx_size(struct rte_cryptodev *dev);
+
#else
static inline void
@@ -276,5 +288,6 @@ static inline void
qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused)
{
}
+
#endif
#endif /* _QAT_SYM_H_ */
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
new file mode 100644
index 000000000..bb372a0b2
--- /dev/null
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -0,0 +1,923 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_cryptodev_pmd.h>
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym.h"
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_qp.h"
+
+struct qat_sym_dp_service_ctx {
+ struct qat_sym_session *session;
+ uint32_t tail;
+ uint32_t head;
+};
+
+static __rte_always_inline int32_t
+qat_sym_dp_get_data(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs)
+{
+ struct qat_queue *tx_queue;
+ struct qat_sym_op_cookie *cookie;
+ struct qat_sgl *list;
+ uint32_t i;
+ uint32_t total_len;
+
+ if (likely(n_data_vecs == 1)) {
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ data[0].iova;
+ req->comn_mid.src_length = req->comn_mid.dst_length =
+ data[0].len;
+ return data[0].len;
+ }
+
+ if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER)
+ return -1;
+
+ total_len = 0;
+ tx_queue = &qp->tx_q;
+
+ ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags,
+ QAT_COMN_PTR_TYPE_SGL);
+ cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz];
+ list = (struct qat_sgl *)&cookie->qat_sgl_src;
+
+ for (i = 0; i < n_data_vecs; i++) {
+ list->buffers[i].len = data[i].len;
+ list->buffers[i].resrvd = 0;
+ list->buffers[i].addr = data[i].iova;
+ if (total_len + data[i].len > UINT32_MAX) {
+ QAT_DP_LOG(ERR, "Message too long");
+ return -1;
+ }
+ total_len += data[i].len;
+ }
+
+ list->num_bufs = i;
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ cookie->qat_sgl_src_phys_addr;
+ req->comn_mid.src_length = req->comn_mid.dst_length = 0;
+ return total_len;
+}
+
+static __rte_always_inline void
+set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
+ struct rte_crypto_data *iv, uint32_t iv_len,
+ struct icp_qat_fw_la_bulk_req *qat_req)
+{
+ /* copy IV into request if it fits */
+ if (iv_len <= sizeof(cipher_param->u.cipher_IV_array))
+ rte_memcpy(cipher_param->u.cipher_IV_array, iv->base, iv_len);
+ else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr = iv->iova;
+ }
+}
+
+#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \
+ (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \
+ ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status))
+
+static __rte_always_inline void
+qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n)
+{
+ uint32_t i;
+
+ for (i = 0; i < n; i++)
+ sta[i] = status;
+}
+
+static __rte_always_inline void
+submit_one_aead_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, struct rte_crypto_data *aad_vec,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param =
+ (void *)&req->serv_specif_rqpars;
+ struct icp_qat_fw_la_auth_req_params *auth_param =
+ (void *)((uint8_t *)&req->serv_specif_rqpars +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+ uint8_t *aad_data;
+ uint8_t aad_ccm_real_len;
+ uint8_t aad_len_field_sz;
+ uint32_t msg_len_be;
+ rte_iova_t aad_iova = 0;
+ uint8_t q;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy_generic(cipher_param->u.cipher_IV_array,
+ iv_vec->base, ctx->cipher_iv.length);
+ aad_iova = aad_vec->iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC:
+ aad_data = aad_vec->base;
+ aad_iova = aad_vec->iova;
+ aad_ccm_real_len = 0;
+ aad_len_field_sz = 0;
+ msg_len_be = rte_bswap32((uint32_t)data_len -
+ ofs.ofs.cipher.head);
+
+ if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) {
+ aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ aad_ccm_real_len = ctx->aad_len -
+ ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ } else {
+ aad_data = iv_vec->base;
+ aad_iova = iv_vec->iova;
+ }
+
+ q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length;
+ aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS(
+ aad_len_field_sz, ctx->digest_length, q);
+ if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET + (q -
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE),
+ (uint8_t *)&msg_len_be,
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE);
+ } else {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)&msg_len_be +
+ (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE
+ - q), q);
+ }
+
+ if (aad_len_field_sz > 0) {
+ *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] =
+ rte_bswap16(aad_ccm_real_len);
+
+ if ((aad_ccm_real_len + aad_len_field_sz)
+ % ICP_QAT_HW_CCM_AAD_B0_LEN) {
+ uint8_t pad_len = 0;
+ uint8_t pad_idx = 0;
+
+ pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ((aad_ccm_real_len +
+ aad_len_field_sz) %
+ ICP_QAT_HW_CCM_AAD_B0_LEN);
+ pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN +
+ aad_ccm_real_len +
+ aad_len_field_sz;
+ memset(&aad_data[pad_idx], 0, pad_len);
+ }
+
+ rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array)
+ + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv_vec->base +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ *(uint8_t *)&cipher_param->u.cipher_IV_array[0] =
+ q - ICP_QAT_HW_CCM_NONCE_OFFSET;
+
+ rte_memcpy((uint8_t *)aad_vec->base +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv_vec->base +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ }
+ break;
+ default:
+ break;
+ }
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head;
+ auth_param->auth_off = ofs.ofs.cipher.head;
+ auth_param->auth_len = data_len - ofs.ofs.cipher.head;
+ auth_param->auth_res_addr = digest_vec->iova;
+ auth_param->u1.aad_adr = aad_iova;
+
+ if (ctx->is_single_pass) {
+ cipher_param->spc_aad_addr = aad_iova;
+ cipher_param->spc_auth_res_addr = digest_vec->iova;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_aead_job(ctx, req, iv_vec, digest_vec, aad_vec, ofs,
+ (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_aead_job(ctx, req, vec->iv_vec + i,
+ vec->digest_vec + i, vec->aad_vec + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_cipher_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+
+ /* cipher IV */
+ set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req);
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head;
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ __rte_unused struct rte_crypto_data *digest_vec,
+ __rte_unused struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_cipher_job(ctx, req, iv_vec, ofs, (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_cipher_job(ctx, req, vec->iv_vec + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_auth_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, union rte_crypto_sym_ofs ofs,
+ uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = data_len - ofs.ofs.auth.head;
+ auth_param->auth_res_addr = digest_vec->iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = iv_vec->iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy_generic(cipher_param->u.cipher_IV_array,
+ iv_vec->base, ctx->cipher_iv.length);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec,
+ __rte_unused struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_auth_job(ctx, req, iv_vec, digest_vec, ofs,
+ (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_auth_job(ctx, req, vec->iv_vec + i,
+ vec->digest_vec + i, ofs, (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_chain_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, union rte_crypto_sym_ofs ofs,
+ uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+ rte_iova_t auth_iova_end = 0;
+ int32_t cipher_len, auth_len;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ cipher_len = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+
+ assert(cipher_len > 0 && auth_len > 0);
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = cipher_len;
+ set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = auth_len;
+ auth_param->auth_res_addr = digest_vec->iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = iv_vec->iova;
+
+ if (unlikely(n_data_vecs > 1)) {
+ int auth_end_get = 0, i = n_data_vecs - 1;
+ struct rte_crypto_vec *cvec = &data[i];
+ uint32_t len;
+
+ len = data_len - ofs.ofs.auth.tail;
+
+ while (i >= 0 && len > 0) {
+ if (cvec->len >= len) {
+ auth_iova_end = cvec->iova +
+ (cvec->len - len);
+ len = 0;
+ auth_end_get = 1;
+ break;
+ }
+ len -= cvec->len;
+ i--;
+ cvec--;
+ }
+
+ assert(auth_end_get != 0);
+ } else
+ auth_iova_end = digest_vec->iova +
+ ctx->digest_length;
+
+ /* Then check if digest-encrypted conditions are met */
+ if ((auth_param->auth_off + auth_param->auth_len <
+ cipher_param->cipher_offset +
+ cipher_param->cipher_length) &&
+ (digest_vec->iova == auth_iova_end)) {
+ /* Handle partial digest encryption */
+ if (cipher_param->cipher_offset +
+ cipher_param->cipher_length <
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length)
+ req->comn_mid.dst_length =
+ req->comn_mid.src_length =
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length;
+ struct icp_qat_fw_comn_req_hdr *header =
+ &req->comn_hdr;
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+ header->serv_specif_flags,
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+ }
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec,
+ __rte_unused struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_chain_job(ctx, req, data, n_data_vecs, iv_vec, digest_vec,
+ ofs, (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_chain_job(ctx, req, vec->sgl[i].vec, vec->sgl[i].num,
+ vec->iv_vec + i, vec->digest_vec + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_dequeue(void *qp_data, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct icp_qat_fw_comn_resp *resp;
+ void *resp_opaque;
+ uint32_t i, n, inflight;
+ uint32_t head;
+ uint8_t status;
+
+ *n_success_jobs = 0;
+ head = service_ctx->head;
+
+ inflight = qp->enqueued - qp->dequeued;
+ if (unlikely(inflight == 0))
+ return 0;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ head);
+ /* no operation ready */
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return 0;
+
+ resp_opaque = (void *)(uintptr_t)resp->opaque_data;
+ /* get the dequeue count */
+ n = get_dequeue_count(resp_opaque);
+ if (unlikely(n == 0))
+ return 0;
+
+ out_opaque[0] = resp_opaque;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ post_dequeue(resp_opaque, 0, status);
+ *n_success_jobs += status;
+
+ head = (head + rx_queue->msg_size) & rx_queue->modulo_mask;
+
+ /* we already finished dequeue when n == 1 */
+ if (unlikely(n == 1)) {
+ i = 1;
+ goto end_deq;
+ }
+
+ if (is_opaque_array) {
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp ==
+ ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ out_opaque[i] = (void *)(uintptr_t)
+ resp->opaque_data;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ *n_success_jobs += status;
+ post_dequeue(out_opaque[i], i, status);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ }
+
+ goto end_deq;
+ }
+
+ /* opaque is not array */
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ post_dequeue(resp_opaque, i, status);
+ *n_success_jobs += status;
+ }
+
+end_deq:
+ service_ctx->head = head;
+ return i;
+}
+
+static __rte_always_inline int
+qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *service_data,
+ void **out_opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+
+ register struct icp_qat_fw_comn_resp *resp;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ service_ctx->head);
+
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return -1;
+
+ *out_opaque = (void *)(uintptr_t)resp->opaque_data;
+
+ service_ctx->head = (service_ctx->head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+
+ return QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+}
+
+static __rte_always_inline void
+qat_sym_dp_kick_tail(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+
+ qp->enqueued += n;
+ qp->stats.enqueued_count += n;
+
+ assert(service_ctx->tail == ((tx_queue->tail + tx_queue->msg_size * n) &
+ tx_queue->modulo_mask));
+
+ tx_queue->tail = service_ctx->tail;
+
+ WRITE_CSR_RING_TAIL(qp->mmap_bar_addr,
+ tx_queue->hw_bundle_number,
+ tx_queue->hw_queue_number, tx_queue->tail);
+ tx_queue->csr_tail = tx_queue->tail;
+}
+
+static __rte_always_inline void
+qat_sym_dp_update_head(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+
+ assert(service_ctx->head == ((rx_queue->head + rx_queue->msg_size * n) &
+ rx_queue->modulo_mask));
+
+ rx_queue->head = service_ctx->head;
+ rx_queue->nb_processed_responses += n;
+ qp->dequeued += n;
+ qp->stats.dequeued_count += n;
+ if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) {
+ uint32_t old_head, new_head;
+ uint32_t max_head;
+
+ old_head = rx_queue->csr_head;
+ new_head = rx_queue->head;
+ max_head = qp->nb_descriptors * rx_queue->msg_size;
+
+ /* write out free descriptors */
+ void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head;
+
+ if (new_head < old_head) {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE,
+ max_head - old_head);
+ memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE,
+ new_head);
+ } else {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head -
+ old_head);
+ }
+ rx_queue->nb_processed_responses = 0;
+ rx_queue->csr_head = new_head;
+
+ /* write current head to CSR */
+ WRITE_CSR_RING_HEAD(qp->mmap_bar_addr,
+ rx_queue->hw_bundle_number, rx_queue->hw_queue_number,
+ new_head);
+ }
+}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ uint8_t is_update)
+{
+ struct qat_qp *qp;
+ struct qat_sym_session *ctx;
+ struct qat_sym_dp_service_ctx *dp_ctx;
+
+ if (service_ctx == NULL || session_ctx.crypto_sess == NULL ||
+ sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -EINVAL;
+
+ qp = dev->data->queue_pairs[qp_id];
+ ctx = (struct qat_sym_session *)get_sym_session_private_data(
+ session_ctx.crypto_sess, qat_sym_driver_id);
+ dp_ctx = (struct qat_sym_dp_service_ctx *)
+ service_ctx->drv_service_data;
+
+ if (!is_update) {
+ memset(service_ctx, 0, sizeof(*service_ctx) +
+ sizeof(struct qat_sym_dp_service_ctx));
+ service_ctx->qp_data = dev->data->queue_pairs[qp_id];
+ dp_ctx->tail = qp->tx_q.tail;
+ dp_ctx->head = qp->rx_q.head;
+ }
+
+ dp_ctx->session = ctx;
+
+ service_ctx->submit_done = qat_sym_dp_kick_tail;
+ service_ctx->dequeue_opaque = qat_sym_dp_dequeue;
+ service_ctx->dequeue_single = qat_sym_dp_dequeue_single_job;
+ service_ctx->dequeue_done = qat_sym_dp_update_head;
+
+ if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER ||
+ ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+ /* AES-GCM or AES-CCM */
+ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 ||
+ (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128
+ && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE
+ && ctx->qat_hash_alg ==
+ ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AEAD)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_aead_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_aead;
+ } else {
+ if (service_type != RTE_CRYPTO_DP_SYM_CHAIN)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_chain_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_chain;
+ }
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AUTH_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_auth_jobs;
+ service_ctx->submit_single_job = qat_sym_dp_submit_single_auth;
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
+ if (service_type != RTE_CRYPTO_DP_SYM_CIPHER_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_cipher_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_cipher;
+ }
+
+ return 0;
+}
+
+int
+qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+ return sizeof(struct qat_sym_dp_service_ctx);
+}
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 314742f53..bef08c3bc 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = {
/* Crypto related operations */
.sym_session_get_size = qat_sym_session_get_private_size,
.sym_session_configure = qat_sym_session_configure,
- .sym_session_clear = qat_sym_session_clear
+ .sym_session_clear = qat_sym_session_clear,
+
+ /* Data plane service related operations */
+ .get_drv_ctx_size = qat_sym_get_service_ctx_size,
+ .configure_service = qat_sym_dp_configure_service_ctx,
};
#ifdef RTE_LIBRTE_SECURITY
@@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+ RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
--
2.20.1
* [dpdk-dev] [dpdk-dev v6 3/4] test/crypto: add unit-test for cryptodev direct APIs
From: Fan Zhang @ 2020-08-18 16:28 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, ArkadiuszX.Kusztal, AdamX.Dybkowski, Fan Zhang
This patch adds QAT unit tests that exercise the cryptodev symmetric
crypto direct APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev.c | 354 ++++++++++++++++++++++++--
app/test/test_cryptodev.h | 6 +
app/test/test_cryptodev_blockcipher.c | 50 ++--
3 files changed, 373 insertions(+), 37 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..d6909984d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -49,6 +49,8 @@
#define VDEV_ARGS_SIZE 100
#define MAX_NB_SESSIONS 4
+#define MAX_DRV_SERVICE_CTX_SIZE 256
+
#define IN_PLACE 0
#define OUT_OF_PLACE 1
@@ -57,6 +59,8 @@ static int gbl_driver_id;
static enum rte_security_session_action_type gbl_action_type =
RTE_SECURITY_ACTION_TYPE_NONE;
+int hw_dp_test;
+
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
struct rte_mempool *large_mbuf_pool;
@@ -147,6 +151,153 @@ ceil_byte_length(uint32_t num_bits)
return (num_bits >> 3);
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits)
+{
+ int32_t n;
+ struct rte_crypto_sym_op *sop;
+ struct rte_crypto_op *ret_op = NULL;
+ struct rte_crypto_vec data_vec[UINT8_MAX];
+ struct rte_crypto_data iv_vec, aad_vec, digest_vec;
+ union rte_crypto_sym_ofs ofs;
+ int32_t status;
+ uint32_t min_ofs, max_len;
+ union rte_cryptodev_session_ctx sess;
+ enum rte_crypto_dp_service service_type;
+ uint32_t count = 0;
+ uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0};
+ struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
+ int ctx_service_size;
+
+ sop = op->sym;
+
+ sess.crypto_sess = sop->session;
+
+ if (is_cipher && is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_CHAIN;
+ min_ofs = RTE_MIN(sop->cipher.data.offset,
+ sop->auth.data.offset);
+ max_len = RTE_MAX(sop->cipher.data.length,
+ sop->auth.data.length);
+ } else if (is_cipher) {
+ service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY;
+ min_ofs = sop->cipher.data.offset;
+ max_len = sop->cipher.data.length;
+ } else if (is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY;
+ min_ofs = sop->auth.data.offset;
+ max_len = sop->auth.data.length;
+ } else { /* aead */
+ service_type = RTE_CRYPTO_DP_SYM_AEAD;
+ min_ofs = sop->aead.data.offset;
+ max_len = sop->aead.data.length;
+ }
+
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ min_ofs = min_ofs >> 3;
+ }
+
+ ctx_service_size = rte_cryptodev_get_dp_service_ctx_data_size(dev_id);
+ assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE &&
+ ctx_service_size > 0);
+
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ /* test update service */
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ n = rte_crypto_mbuf_to_vec(sop->m_src, 0, min_ofs + max_len,
+ data_vec, RTE_DIM(data_vec));
+ if (n < 0 || n != sop->m_src->nb_segs) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ ofs.raw = 0;
+
+ iv_vec.base = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ iv_vec.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+
+ switch (service_type) {
+ case RTE_CRYPTO_DP_SYM_AEAD:
+ ofs.ofs.cipher.head = sop->aead.data.offset;
+ aad_vec.base = (void *)sop->aead.aad.data;
+ aad_vec.iova = sop->aead.aad.phys_addr;
+ digest_vec.base = (void *)sop->aead.digest.data;
+ digest_vec.iova = sop->aead.digest.phys_addr;
+ if (len_in_bits) {
+ ofs.ofs.cipher.head >>= 3;
+ ofs.ofs.cipher.tail >>= 3;
+ }
+ break;
+ case RTE_CRYPTO_DP_SYM_CIPHER_ONLY:
+ ofs.ofs.cipher.head = sop->cipher.data.offset;
+ if (len_in_bits) {
+ ofs.ofs.cipher.head >>= 3;
+ ofs.ofs.cipher.tail >>= 3;
+ }
+ break;
+ case RTE_CRYPTO_DP_SYM_AUTH_ONLY:
+ ofs.ofs.auth.head = sop->auth.data.offset;
+ digest_vec.base = (void *)sop->auth.digest.data;
+ digest_vec.iova = sop->auth.digest.phys_addr;
+ break;
+ case RTE_CRYPTO_DP_SYM_CHAIN:
+ ofs.ofs.cipher.head =
+ sop->cipher.data.offset - sop->auth.data.offset;
+ ofs.ofs.cipher.tail =
+ (sop->auth.data.offset + sop->auth.data.length) -
+ (sop->cipher.data.offset + sop->cipher.data.length);
+ if (len_in_bits) {
+ ofs.ofs.cipher.head >>= 3;
+ ofs.ofs.cipher.tail >>= 3;
+ }
+ digest_vec.base = (void *)sop->auth.digest.data;
+ digest_vec.iova = sop->auth.digest.phys_addr;
+ break;
+ default:
+ break;
+ }
+
+ status = rte_cryptodev_dp_submit_single_job(ctx, data_vec, n, ofs,
+ &iv_vec, &digest_vec, &aad_vec, (void *)op);
+ if (status < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ rte_cryptodev_dp_submit_done(ctx, 1);
+
+ status = -1;
+ while (count++ < 1024 && status == -1) {
+ status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
+ (void **)&ret_op);
+ if (status == -1)
+ rte_pause();
+ }
+
+ if (status != -1)
+ rte_cryptodev_dp_dequeue_done(ctx, 1);
+
+ if (count == 1025 || status != 1 || ret_op != op) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
static void
process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
{
@@ -2470,7 +2621,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -2549,7 +2704,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2619,6 +2778,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -2690,7 +2852,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2897,8 +3063,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
- ut_params->op);
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -2983,7 +3153,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3306,7 +3480,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3381,7 +3559,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3756,7 +3938,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -3924,7 +4110,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4019,7 +4209,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4155,7 +4349,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4344,7 +4542,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4526,7 +4728,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4716,7 +4922,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4857,7 +5067,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4944,7 +5158,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5031,7 +5249,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5119,7 +5341,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5251,7 +5477,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5437,7 +5667,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -7043,6 +7277,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8540,6 +8777,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (hw_dp_test == 1)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -11480,6 +11720,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
if (oop == IN_PLACE &&
gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (oop == IN_PLACE && hw_dp_test == 1)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -13041,6 +13284,75 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static struct unit_test_suite cryptodev_sym_direct_api_testsuite = {
+ .suite_name = "Crypto Sym direct API Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_auth_cipher_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_auth_cipher_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_generate_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_cipheronly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_authonly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_CCM_authenticated_encryption_test_case_128_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_CCM_authenticated_decryption_test_case_128_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+ int ret;
+
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both "
+ "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
+ "are enabled in config file to run this testsuite.\n");
+ return TEST_SKIPPED;
+ }
+
+ hw_dp_test = 1;
+ ret = unit_test_suite_runner(&cryptodev_sym_direct_api_testsuite);
+ hw_dp_test = 0;
+
+ return ret;
+}
+
+REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api);
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..c382c12c4 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -71,6 +71,8 @@
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+extern int hw_dp_test;
+
/**
* Write (spread) data from buffer to mbuf data
*
@@ -209,4 +211,8 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len,
return NULL;
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits);
+
#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 221262341..fc540e362 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -462,25 +462,43 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
/* Process crypto operation */
- if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Error sending packet for encryption");
- status = TEST_FAILED;
- goto error_exit;
- }
+ if (hw_dp_test) {
+ uint8_t is_cipher = 0, is_auth = 0;
+
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
+ RTE_LOG(DEBUG, USER1,
+ "QAT direct API does not support OOP, Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED");
+ status = TEST_SUCCESS;
+ goto error_exit;
+ }
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
+ is_cipher = 1;
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
+ is_auth = 1;
+
+ process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0);
+ } else {
+ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Error sending packet for encryption");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
- op = NULL;
+ op = NULL;
- while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
- rte_pause();
+ while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
+ rte_pause();
- if (!op) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Failed to process sym crypto op");
- status = TEST_FAILED;
- goto error_exit;
+ if (!op) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Failed to process sym crypto op");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
}
debug_hexdump(stdout, "m_src(after):",
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v6 4/4] doc: add cryptodev service APIs guide
2020-08-18 16:28 [dpdk-dev] [dpdk-dev v6 0/4] cryptodev: add data-path service APIs Fan Zhang
` (2 preceding siblings ...)
2020-08-18 16:28 ` [dpdk-dev] [dpdk-dev v6 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
@ 2020-08-18 16:28 ` Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs Fan Zhang
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-18 16:28 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, ArkadiuszX.Kusztal, AdamX.Dybkowski, Fan Zhang
This patch updates the programmer's guide to demonstrate the usage
and limitations of the cryptodev symmetric crypto data-path service
APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++
1 file changed, 90 insertions(+)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..77521c959 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -631,6 +631,96 @@ a call argument. Status different than zero must be treated as error.
For more details, e.g. how to convert an mbuf to an SGL, please refer to an
example usage in the IPsec library implementation.
+Cryptodev Direct Data-plane Service API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The direct crypto data-path service is a set of APIs provided especially for
+external libraries/applications that want to take advantage of the rich
+features of cryptodev but do not necessarily depend on cryptodev
+operations, mempools, or mbufs in their data-path implementations.
+
+The direct crypto data-path service has the following advantages:
+
+- Supports raw data pointers and physical addresses as input.
+- Does not require specific data structures allocated from the heap, such as
+  the cryptodev operation.
+- Enqueues in a burst or a single operation. The service allows enqueuing in
+  a burst similar to the ``rte_cryptodev_enqueue_burst`` operation, or
+  enqueuing one job at a time while maintaining the necessary context data
+  locally for the next single-job enqueue operation. The latter method is
+  especially helpful when the user application's crypto operations are
+  clustered into a burst: enqueuing one operation at a time removes one
+  additional loop and reduces cache misses in the double "looping" situation.
+- Customizable dequeue count. Instead of dequeuing the maximum possible
+  number of operations, as the ``rte_cryptodev_dequeue_burst`` operation
+  does, the service allows the user to provide a callback function to decide
+  how many operations to dequeue. This is especially helpful when the
+  expected dequeue count is hidden inside the opaque data stored during
+  enqueue; the user can provide the callback function to parse the opaque
+  data structure.
+- Abandon enqueue and dequeue at any time. One of the drawbacks of the
+  ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
+  operations is that once an operation is enqueued/dequeued there is no way
+  to undo it. The service makes abandoning an operation possible by creating
+  a local copy of the queue operation data in the service context data. The
+  data is written back to the driver-maintained operation data when the
+  enqueue or dequeue done function is called.
+
+Cryptodev PMDs that support this feature advertise the
+``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag. To use this feature, the
+function ``rte_cryptodev_get_dp_service_ctx_data_size`` should be called to
+get the data-path service context data size. The user should create a local
+buffer at least this size long and initialize it using the
+``rte_cryptodev_dp_configure_service`` function call.
+
+The ``rte_cryptodev_dp_configure_service`` function call initializes or
+updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
+driver-specific queue pair data pointer and service context buffer, and a
+set of function pointers for enqueuing and dequeuing different algorithms'
+operations. ``rte_cryptodev_dp_configure_service`` should be called:
+
+- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
+- When a different cryptodev session, security session, or session-less xform
+  is used (set the ``is_update`` parameter to 1).
+
+Two different enqueue functions are provided:
+
+- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
+  the ``rte_crypto_sym_vec`` structure.
+- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
+
+Neither enqueue function commands the crypto device to start processing
+until the ``rte_cryptodev_dp_submit_done`` function is called. Before then the
+driver only stores the necessary context data in the
+``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
+user wants to abandon the submitted operations, simply call the
+``rte_cryptodev_dp_configure_service`` function again with the parameter
+``is_update`` set to 0. The driver will restore the service context data to
+its previous state.
+
+To dequeue the operations the user also has two functions:
+
+- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
+  user needs to provide a callback function for the driver to get the
+  dequeue count and perform post-processing, such as writing the status
+  field.
+- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
+
+As with enqueue, the ``rte_cryptodev_dp_dequeue_done`` function is used to
+merge the user's local service context data with the driver's queue operation
+data. To abandon the dequeue operation (while still keeping the operations in
+the queue), the user shall skip the ``rte_cryptodev_dp_dequeue_done`` function
+call and instead call the ``rte_cryptodev_dp_configure_service`` function with
+the parameter ``is_update`` set to 0.
+
+There are a few limitations to the data-path service:
+
+* Only in-place operations are supported.
+* The APIs are NOT thread-safe.
+* The direct APIs' enqueue/dequeue CANNOT be mixed with
+  ``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst``, or vice
+  versa.
+
+See the *DPDK API Reference* for details of each API definition.
+
Sample code
-----------
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs
2020-08-18 16:28 [dpdk-dev] [dpdk-dev v6 0/4] cryptodev: add data-path service APIs Fan Zhang
` (3 preceding siblings ...)
2020-08-18 16:28 ` [dpdk-dev] [dpdk-dev v6 4/4] doc: add cryptodev service APIs guide Fan Zhang
@ 2020-08-28 12:58 ` Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto " Fan Zhang
` (4 more replies)
4 siblings, 5 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-28 12:58 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
roy.fan.zhang
The direct crypto data-path service is a set of APIs provided especially for
external libraries/applications that want to take advantage of the rich
features of cryptodev but do not necessarily depend on cryptodev
operations, mempools, or mbufs in their data-path implementations.
The direct crypto data-path service has the following advantages:
- Supports raw data pointers and physical addresses as input.
- Does not require specific data structures allocated from the heap, such as
  the cryptodev operation.
- Enqueues in a burst or a single operation. The service allows enqueuing in
  a burst similar to the ``rte_cryptodev_enqueue_burst`` operation, or
  enqueuing one job at a time while maintaining the necessary context data
  locally for the next single-job enqueue operation. The latter method is
  especially helpful when the user application's crypto operations are
  clustered into a burst: enqueuing one operation at a time removes one
  additional loop and reduces cache misses in the double "looping" situation.
- Customizable dequeue count. Instead of dequeuing the maximum possible
  number of operations, as the ``rte_cryptodev_dequeue_burst`` operation
  does, the service allows the user to provide a callback function to decide
  how many operations to dequeue. This is especially helpful when the
  expected dequeue count is hidden inside the opaque data stored during
  enqueue; the user can provide the callback function to parse the opaque
  data structure.
- Abandon enqueue and dequeue at any time. One of the drawbacks of the
  ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
  operations is that once an operation is enqueued/dequeued there is no way
  to undo it. The service makes abandoning an operation possible by creating
  a local copy of the queue operation data in the service context data. The
  data is written back to the driver-maintained operation data when the
  enqueue or dequeue done function is called.
Cryptodev PMDs that support this feature advertise the
``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag. To use this feature, the
function ``rte_cryptodev_get_dp_service_ctx_data_size`` should be called to
get the data-path service context data size. The user should create a local
buffer at least this size long and initialize it using the
``rte_cryptodev_dp_configure_service`` function call.
The ``rte_cryptodev_dp_configure_service`` function call initializes or
updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
driver-specific queue pair data pointer and service context buffer, and a
set of function pointers for enqueuing and dequeuing different algorithms'
operations. ``rte_cryptodev_dp_configure_service`` should be called:
- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
- When a different cryptodev session, security session, or session-less xform
  is used (set the ``is_update`` parameter to 1).
Two different enqueue functions are provided:
- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
  the ``rte_crypto_sym_vec`` structure.
- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
Neither enqueue function commands the crypto device to start processing
until the ``rte_cryptodev_dp_submit_done`` function is called. Before then the
driver only stores the necessary context data in the
``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
user wants to abandon the submitted operations, simply call the
``rte_cryptodev_dp_configure_service`` function again with the parameter
``is_update`` set to 0. The driver will restore the service context data to
the previous state.
To dequeue the operations the user also has two functions:
- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
  user needs to provide a callback function for the driver to get the
  dequeue count and perform post-processing, such as writing the status
  field.
- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
As with enqueue, the ``rte_cryptodev_dp_dequeue_done`` function is used to
merge the user's local service context data with the driver's queue operation
data. To abandon the dequeue operation (while still keeping the operations in
the queue), the user shall skip the ``rte_cryptodev_dp_dequeue_done`` function
call and instead call the ``rte_cryptodev_dp_configure_service`` function with
the parameter ``is_update`` set to 0.
There are a few limitations to the data-path service:
* Only in-place operations are supported.
* The APIs are NOT thread-safe.
* The direct APIs' enqueue/dequeue CANNOT be mixed with
  ``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst``, or vice
  versa.
v7:
- Fixed a few typos.
- Fixed length calculation bugs.
v6:
- Rebased on top of DPDK 20.08.
- Changed to service ctx and added single job submit/dequeue.
v5:
- Changed to use rte_crypto_sym_vec as input.
- Changed to use public APIs instead of use function pointer.
v4:
- Added missed patch.
v3:
- Instead of QAT only API, moved the API to cryptodev.
- Added cryptodev feature flags.
v2:
- Used a structure to simplify parameters.
- Added unit tests.
- Added documentation.
Fan Zhang (4):
cryptodev: add crypto data-path service APIs
crypto/qat: add crypto data-path service API support
test/crypto: add unit-test for cryptodev direct APIs
doc: add cryptodev service APIs guide
app/test/test_cryptodev.c | 354 ++++++-
app/test/test_cryptodev.h | 6 +
app/test/test_cryptodev_blockcipher.c | 50 +-
doc/guides/prog_guide/cryptodev_lib.rst | 90 ++
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 926 ++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 44 +-
lib/librte_cryptodev/rte_cryptodev.c | 45 +
lib/librte_cryptodev/rte_cryptodev.h | 335 ++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 +-
.../rte_cryptodev_version.map | 10 +
15 files changed, 1892 insertions(+), 48 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs Fan Zhang
@ 2020-08-28 12:58 ` Fan Zhang
2020-08-31 6:23 ` Kusztal, ArkadiuszX
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
` (3 subsequent siblings)
4 siblings, 1 reply; 84+ messages in thread
From: Fan Zhang @ 2020-08-28 12:58 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
roy.fan.zhang, Piotr Bronowski
This patch adds data-path service APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors and operation modes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
---
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 44 ++-
lib/librte_cryptodev/rte_cryptodev.c | 45 +++
lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++-
.../rte_cryptodev_version.map | 10 +
6 files changed, 481 insertions(+), 9 deletions(-)
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index fd5ef3a87..f009be9af 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op,
return 0;
}
+/** Crypto data-path service types */
+enum rte_crypto_dp_service {
+ RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
+ RTE_CRYPTO_DP_SYM_AUTH_ONLY,
+ RTE_CRYPTO_DP_SYM_CHAIN,
+ RTE_CRYPTO_DP_SYM_AEAD,
+ RTE_CRYPTO_DP_N_SERVICE
+};
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..518e4111b 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -50,6 +50,18 @@ struct rte_crypto_sgl {
uint32_t num;
};
+/**
+ * Crypto I/O data without length info.
+ * Intended for passing input/output data buffers whose lengths are
+ * defined when the crypto session is created.
+ */
+struct rte_crypto_data {
+ /** virtual address of the data buffer */
+ void *base;
+ /** IOVA of the data buffer */
+ rte_iova_t iova;
+};
+
/**
* Synchronous operation descriptor.
* Supposed to be used with CPU crypto API call.
@@ -57,12 +69,32 @@ struct rte_crypto_sgl {
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
- /** array of pointers to IV */
- void **iv;
- /** array of pointers to AAD */
- void **aad;
- /** array of pointers to digest */
- void **digest;
+
+ union {
+
+ /* Supposed to be used with CPU crypto API call. */
+ struct {
+ /** array of pointers to IV */
+ void **iv;
+ /** array of pointers to AAD */
+ void **aad;
+ /** array of pointers to digest */
+ void **digest;
+ };
+
+ /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec()
+ * call.
+ */
+ struct {
+ /** vector to IV */
+ struct rte_crypto_data *iv_vec;
+ /** vector to AAD */
+ struct rte_crypto_data *aad_vec;
+ /** vector to Digest */
+ struct rte_crypto_data *digest_vec;
+ };
+ };
+
/**
* array of statuses for each operation:
* - 0 on success
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..8a28511f9 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1914,6 +1914,51 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
}
+int
+rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id)
+{
+ struct rte_cryptodev *dev;
+ int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
+ int32_t priv_size;
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (*dev->dev_ops->get_drv_ctx_size == NULL ||
+ !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)) {
+ return -1;
+ }
+
+ priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
+ if (priv_size < 0)
+ return -1;
+
+ return RTE_ALIGN_CEIL((size + priv_size), 8);
+}
+
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
+{
+ struct rte_cryptodev *dev;
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)
+ || dev->dev_ops->configure_service == NULL)
+ return -1;
+
+ return (*dev->dev_ops->configure_service)(dev, qp_id, ctx,
+ service_type, sess_type, session_ctx, is_update);
+}
+
/** Initialise rte_crypto_op mempool element */
static void
rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..9c97846f3 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
/**< Support symmetric session-less operations */
#define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
/**< Support operations on data which is not byte aligned */
-
+#define RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE (1ULL << 24)
+/**< Support the data-path service APIs with raw data as input */
/**
* Get the name of a crypto device feature flag
@@ -1351,6 +1352,338 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
struct rte_crypto_sym_vec *vec);
+/**
+ * Get the size of the data-path service context for the given device.
+ *
+ * @param dev_id The device identifier.
+ *
+ * @return
+ * - If the device supports the data-path service, return the context size.
+ * - If the device does not support the data-path service, return -1.
+ */
+__rte_experimental
+int
+rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id);
+
+/**
+ * Union of different crypto session types, including session-less xform
+ * pointer.
+ */
+union rte_cryptodev_session_ctx {
+ struct rte_cryptodev_sym_session *crypto_sess;
+ struct rte_crypto_sym_xform *xform;
+ struct rte_security_session *sec_sess;
+};
+
+/**
+ * Submit a data vector to the device queue; the driver will not start
+ * processing until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void **opaque);
+
+/**
+ * Submit a single job to the device queue; the driver will not start
+ * processing it until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV data.
+ * @param digest Digest data.
+ * @param aad AAD data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_submit_single_job_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_data *iv, struct rte_crypto_data *digest,
+ struct rte_crypto_data *aad, void *opaque);
+
+/**
+ * Inform the queue pair to start processing all submitted jobs, or that
+ * the dequeue of all dequeued jobs is finished.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param n The total number of submitted jobs.
+ */
+typedef void (*cryptodev_dp_sym_operation_done_t)(void *qp,
+ uint8_t *service_data, uint32_t n);
+
+/**
+ * Typedef of the user-provided callback the driver calls to get the
+ * dequeue count. The function may return a fixed number or a number
+ * parsed from the opaque data stored in the first processed job.
+ *
+ * @param opaque Dequeued opaque data.
+ **/
+typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque);
+
+/**
+ * Typedef of the user-provided callback to post-process a dequeued
+ * operation, e.g. to fill in its status.
+ *
+ * @param opaque Dequeued opaque data. If *is_opaque_array* was
+ * set on dequeue, this value is the opaque data
+ * stored in the specific processed job referenced
+ * by *index*; otherwise it is the opaque data
+ * stored in the first processed job in the burst.
+ * @param index Index number of the processed job.
+ * @param is_op_success Driver filled operation status.
+ **/
+typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index,
+ uint8_t is_op_success);
+
+/**
+ * Dequeue symmetric crypto processing of user-provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from
+ * the device queue. If *is_opaque_array* is
+ * set, there must be enough room to store
+ * all opaque data.
+ * @param is_opaque_array Set to 1 if the opaque data of every
+ * dequeued job shall be written into the
+ * *out_opaque* array.
+ * @param n_success_jobs Driver-written value specifying the
+ * number of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue a single symmetric crypto job of user-provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param out_opaque Opaque pointer to be retrieved from the
+ * device queue.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
+ void *qp, uint8_t *service_data, void **out_opaque);
+
+/**
+ * Context data for asynchronous crypto process.
+ */
+struct rte_crypto_dp_service_ctx {
+ void *qp_data;
+
+ union {
+ /* Supposed to be used for symmetric crypto service */
+ struct {
+ cryptodev_dp_submit_single_job_t submit_single_job;
+ cryptodev_dp_sym_submit_vec_t submit_vec;
+ cryptodev_dp_sym_operation_done_t submit_done;
+ cryptodev_dp_sym_dequeue_t dequeue_opaque;
+ cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
+ cryptodev_dp_sym_operation_done_t dequeue_done;
+ };
+ };
+
+ /* Driver specific service data */
+ uint8_t drv_service_data[];
+};
+
+/**
+ * Configure one data-path service context. When calling this function for
+ * the first time the user should clear the *is_update* parameter, and the
+ * driver will fill the necessary operation data into the *ctx* buffer. Note
+ * that the data stored in the *ctx* buffer does not take effect until
+ * rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param service_type Type of the service requested.
+ * @param sess_type Session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update Set to 1 if *ctx* is pre-initialized and
+ * only needs updating to a different service
+ * type or session; the rest of the driver
+ * data remains the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
+
+/**
+ * Submit a single job to the device queue; the driver will not start
+ * processing it until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV data.
+ * @param digest Digest data.
+ * @param aad AAD data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_data *iv, struct rte_crypto_data *digest,
+ struct rte_crypto_data *aad, void *opaque)
+{
+ return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
+ data, n_data_vecs, ofs, iv, digest, aad, opaque);
+}
+
+/**
+ * Submit a data vector to the device queue; the driver will not start
+ * processing until rte_cryptodev_dp_submit_done() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
+ ofs, opaque);
+}
+
+/**
+ * Command the queue pair to start processing all jobs submitted since the
+ * last rte_cryptodev_dp_submit_done() call.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of submitted jobs.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_cryptodev_dp_submit_done(struct rte_crypto_dp_service_ctx *ctx, uint32_t n)
+{
+ (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+/**
+ * Dequeue symmetric crypto processing of user-provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from
+ * the device queue. If *is_opaque_array* is
+ * set, there must be enough room to store
+ * all opaque data.
+ * @param is_opaque_array Set to 1 if the opaque data of every
+ * dequeued job shall be written into the
+ * *out_opaque* array.
+ * @param n_success_jobs Driver-written value specifying the
+ * number of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data,
+ get_dequeue_count, post_dequeue, out_opaque, is_opaque_array,
+ n_success_jobs);
+}
+
+/**
+ * Dequeue a single symmetric crypto job of user-provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param out_opaque Opaque pointer to be retrieved from the
+ * device queue. The driver shall support
+ * a NULL input for this parameter.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
+ out_opaque);
+}
+
+/**
+ * Inform the queue pair that the dequeue of jobs is finished.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of jobs already dequeued.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_cryptodev_dp_dequeue_done(struct rte_crypto_dp_service_ctx *ctx, uint32_t n)
+{
+ (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 81975d72b..bf0260c87 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -316,6 +316,41 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+/**
+ * Typedef of the driver-provided function to get the service context
+ * private data size.
+ *
+ * @param dev Crypto device pointer.
+ *
+ * @return
+ * - On success return the size of the device's service context private data.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_get_service_ctx_size_t)(
+ struct rte_cryptodev *dev);
+
+/**
+ * Typedef of the driver-provided function to configure the data-path
+ * service.
+ *
+ * @param dev Crypto device pointer.
+ * @param qp_id Crypto device queue pair index.
+ * @param ctx The data-path service context data.
+ * @param service_type Type of the service requested.
+ * @param sess_type Session type.
+ * @param session_ctx Session context data.
+ * @param is_update Set to 1 if *ctx* is pre-initialized and
+ * only needs updating to a different service
+ * type or session; the rest of the driver
+ * data remains the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_configure_service_t)(
+ struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx, uint8_t is_update);
/** Crypto device operations function pointer table */
struct rte_cryptodev_ops {
@@ -348,8 +383,16 @@ struct rte_cryptodev_ops {
/**< Clear a Crypto sessions private data. */
cryptodev_asym_free_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
- /**< process input data synchronously (cpu-crypto). */
+ union {
+ cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+ /**< process input data synchronously (cpu-crypto). */
+ struct {
+ cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
+ /**< Get data path service context data size. */
+ cryptodev_dp_configure_service_t configure_service;
+ /**< Initialize crypto service ctx data. */
+ };
+ };
};
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..d384382d3 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -105,4 +105,14 @@ EXPERIMENTAL {
# added in 20.08
rte_cryptodev_get_qp_status;
+
+ # added in 20.11
+ rte_cryptodev_dp_configure_service;
+ rte_cryptodev_get_dp_service_ctx_data_size;
+ rte_cryptodev_dp_submit_single_job;
+ rte_cryptodev_dp_sym_submit_vec;
+ rte_cryptodev_dp_submit_done;
+ rte_cryptodev_dp_sym_dequeue;
+ rte_cryptodev_dp_sym_dequeue_single_job;
+ rte_cryptodev_dp_dequeue_done;
};
--
2.20.1
* [dpdk-dev] [dpdk-dev v7 2/4] crypto/qat: add crypto data-path service API support
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-08-28 12:58 ` Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
` (2 subsequent siblings)
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-28 12:58 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
roy.fan.zhang
This patch updates the QAT PMD to add crypto data-path service API support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 926 +++++++++++++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
5 files changed, 948 insertions(+), 2 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile
index 85d420709..1b71bbbab 100644
--- a/drivers/common/qat/Makefile
+++ b/drivers/common/qat/Makefile
@@ -42,6 +42,7 @@ endif
SRCS-y += qat_sym.c
SRCS-y += qat_sym_session.c
SRCS-y += qat_sym_pmd.c
+ SRCS-y += qat_sym_hw_dp.c
build_qat = yes
endif
endif
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
index a225f374a..bc90ec44c 100644
--- a/drivers/crypto/qat/meson.build
+++ b/drivers/crypto/qat/meson.build
@@ -15,6 +15,7 @@ if dep.found()
qat_sources += files('qat_sym_pmd.c',
'qat_sym.c',
'qat_sym_session.c',
+ 'qat_sym_hw_dp.c',
'qat_asym_pmd.c',
'qat_asym.c')
qat_ext_deps += dep
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 1a9748849..2d6316130 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -264,6 +264,18 @@ qat_sym_process_response(void **op, uint8_t *resp)
}
*op = (void *)rx_op;
}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ uint8_t is_update);
+
+int
+qat_sym_get_service_ctx_size(struct rte_cryptodev *dev);
+
#else
static inline void
@@ -276,5 +288,6 @@ static inline void
qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused)
{
}
+
#endif
#endif /* _QAT_SYM_H_ */
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
new file mode 100644
index 000000000..0adc55359
--- /dev/null
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -0,0 +1,926 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_cryptodev_pmd.h>
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym.h"
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_qp.h"
+
+struct qat_sym_dp_service_ctx {
+ struct qat_sym_session *session;
+ uint32_t tail;
+ uint32_t head;
+};
+
+static __rte_always_inline int32_t
+qat_sym_dp_get_data(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs)
+{
+ struct qat_queue *tx_queue;
+ struct qat_sym_op_cookie *cookie;
+ struct qat_sgl *list;
+ uint32_t i;
+ uint64_t total_len; /* 64-bit so the overflow check below cannot wrap */
+
+ if (likely(n_data_vecs == 1)) {
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ data[0].iova;
+ req->comn_mid.src_length = req->comn_mid.dst_length =
+ data[0].len;
+ return data[0].len;
+ }
+
+ if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER)
+ return -1;
+
+ total_len = 0;
+ tx_queue = &qp->tx_q;
+
+ ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags,
+ QAT_COMN_PTR_TYPE_SGL);
+ cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz];
+ list = (struct qat_sgl *)&cookie->qat_sgl_src;
+
+ for (i = 0; i < n_data_vecs; i++) {
+ list->buffers[i].len = data[i].len;
+ list->buffers[i].resrvd = 0;
+ list->buffers[i].addr = data[i].iova;
+ if (total_len + data[i].len > UINT32_MAX) {
+ QAT_DP_LOG(ERR, "Message too long");
+ return -1;
+ }
+ total_len += data[i].len;
+ }
+
+ list->num_bufs = i;
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ cookie->qat_sgl_src_phys_addr;
+ req->comn_mid.src_length = req->comn_mid.dst_length = 0;
+ return total_len;
+}
+
+static __rte_always_inline void
+set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
+ struct rte_crypto_data *iv, uint32_t iv_len,
+ struct icp_qat_fw_la_bulk_req *qat_req)
+{
+ /* copy IV into request if it fits */
+ if (iv_len <= sizeof(cipher_param->u.cipher_IV_array))
+ rte_memcpy(cipher_param->u.cipher_IV_array, iv->base, iv_len);
+ else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr = iv->iova;
+ }
+}
+
+#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \
+ (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \
+ ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status))
+
+static __rte_always_inline void
+qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n)
+{
+ uint32_t i;
+
+ for (i = 0; i < n; i++)
+ sta[i] = status;
+}
+
+static __rte_always_inline void
+submit_one_aead_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, struct rte_crypto_data *aad_vec,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param =
+ (void *)&req->serv_specif_rqpars;
+ struct icp_qat_fw_la_auth_req_params *auth_param =
+ (void *)((uint8_t *)&req->serv_specif_rqpars +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+ uint8_t *aad_data;
+ uint8_t aad_ccm_real_len;
+ uint8_t aad_len_field_sz;
+ uint32_t msg_len_be;
+ rte_iova_t aad_iova = 0;
+ uint8_t q;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy_generic(cipher_param->u.cipher_IV_array,
+ iv_vec->base, ctx->cipher_iv.length);
+ aad_iova = aad_vec->iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC:
+ aad_data = aad_vec->base;
+ aad_iova = aad_vec->iova;
+ aad_ccm_real_len = 0;
+ aad_len_field_sz = 0;
+ msg_len_be = rte_bswap32((uint32_t)data_len -
+ ofs.ofs.cipher.head);
+
+ if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) {
+ aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ aad_ccm_real_len = ctx->aad_len -
+ ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ } else {
+ aad_data = iv_vec->base;
+ aad_iova = iv_vec->iova;
+ }
+
+ q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length;
+ aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS(
+ aad_len_field_sz, ctx->digest_length, q);
+ if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET + (q -
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE),
+ (uint8_t *)&msg_len_be,
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE);
+ } else {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)&msg_len_be +
+ (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE
+ - q), q);
+ }
+
+ if (aad_len_field_sz > 0) {
+ *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] =
+ rte_bswap16(aad_ccm_real_len);
+
+ if ((aad_ccm_real_len + aad_len_field_sz)
+ % ICP_QAT_HW_CCM_AAD_B0_LEN) {
+ uint8_t pad_len = 0;
+ uint8_t pad_idx = 0;
+
+ pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ((aad_ccm_real_len +
+ aad_len_field_sz) %
+ ICP_QAT_HW_CCM_AAD_B0_LEN);
+ pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN +
+ aad_ccm_real_len +
+ aad_len_field_sz;
+ memset(&aad_data[pad_idx], 0, pad_len);
+ }
+
+ rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array)
+ + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv_vec->base +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ *(uint8_t *)&cipher_param->u.cipher_IV_array[0] =
+ q - ICP_QAT_HW_CCM_NONCE_OFFSET;
+
+ rte_memcpy((uint8_t *)aad_vec->base +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv_vec->base +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ }
+ break;
+ default:
+ break;
+ }
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_param->auth_off = ofs.ofs.cipher.head;
+ auth_param->auth_len = cipher_param->cipher_length;
+ auth_param->auth_res_addr = digest_vec->iova;
+ auth_param->u1.aad_adr = aad_iova;
+
+ if (ctx->is_single_pass) {
+ cipher_param->spc_aad_addr = aad_iova;
+ cipher_param->spc_auth_res_addr = digest_vec->iova;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_aead_job(ctx, req, iv_vec, digest_vec, aad_vec, ofs,
+ (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_aead_job(ctx, req, vec->iv_vec + i,
+ vec->digest_vec + i, vec->aad_vec + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_cipher_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+
+ /* cipher IV */
+ set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req);
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ __rte_unused struct rte_crypto_data *digest_vec,
+ __rte_unused struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_cipher_job(ctx, req, iv_vec, ofs, (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
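The tail arithmetic in the submit paths above relies on the TX ring size in bytes being a power of two, so advancing and wrapping the tail is a single AND with modulo_mask (ring size minus one) instead of a division. A standalone sketch of that invariant (ring_advance is an illustrative name, not a QAT driver symbol):

```c
#include <stdint.h>

/* Advance a byte-offset tail pointer by one message and wrap it.
 * modulo_mask must be (ring size in bytes - 1), with the ring size a
 * power of two, mirroring tx_queue->modulo_mask above. */
static inline uint32_t
ring_advance(uint32_t tail, uint32_t msg_size, uint32_t modulo_mask)
{
	return (tail + msg_size) & modulo_mask;
}
```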
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_cipher_job(ctx, req, vec->iv_vec + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
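The burst submit functions above guard against ring overflow by comparing enqueued - dequeued + n against max_inflights. Because both counters are uint32_t, the subtraction remains correct in modular arithmetic even after the counters wrap past UINT32_MAX. A minimal sketch of that check (ring_has_room is a hypothetical helper, not part of the driver):

```c
#include <stdint.h>

/* Return 1 if n more jobs fit without exceeding max_inflights.
 * Unsigned subtraction gives the true in-flight count even when
 * "enqueued" has wrapped around while "dequeued" has not. */
static inline int
ring_has_room(uint32_t enqueued, uint32_t dequeued, uint32_t n,
	uint32_t max_inflights)
{
	return (uint32_t)(enqueued - dequeued) + n < max_inflights;
}
```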
+static __rte_always_inline void
+submit_one_auth_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, union rte_crypto_sym_ofs ofs,
+ uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = data_len - ofs.ofs.auth.head -
+ ofs.ofs.auth.tail;
+ auth_param->auth_res_addr = digest_vec->iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = iv_vec->iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy_generic(cipher_param->u.cipher_IV_array,
+ iv_vec->base, ctx->cipher_iv.length);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec,
+ __rte_unused struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_auth_job(ctx, req, iv_vec, digest_vec, ofs,
+ (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_auth_job(ctx, req, vec->iv_vec + i,
+ vec->digest_vec + i, ofs, (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_chain_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec, union rte_crypto_sym_ofs ofs,
+ uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+ rte_iova_t auth_iova_end;
+ int32_t cipher_len, auth_len;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ cipher_len = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+
+ assert(cipher_len > 0 && auth_len > 0);
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = cipher_len;
+ set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = auth_len;
+ auth_param->auth_res_addr = digest_vec->iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = iv_vec->iova;
+
+ if (unlikely(n_data_vecs > 1)) {
+ int auth_end_get = 0, i = n_data_vecs - 1;
+ struct rte_crypto_vec *cvec = &data[i];
+ uint32_t len;
+
+ len = data_len - ofs.ofs.auth.tail;
+
+ while (i >= 0 && len > 0) {
+ if (cvec->len >= len) {
+ auth_iova_end = cvec->iova +
+ (cvec->len - len);
+ len = 0;
+ auth_end_get = 1;
+ break;
+ }
+ len -= cvec->len;
+ i--;
+ cvec--;
+ }
+
+ assert(auth_end_get != 0);
+ } else
+ auth_iova_end = digest_vec->iova +
+ ctx->digest_length;
+
+ /* Then check if digest-encrypted conditions are met */
+ if ((auth_param->auth_off + auth_param->auth_len <
+ cipher_param->cipher_offset +
+ cipher_param->cipher_length) &&
+ (digest_vec->iova == auth_iova_end)) {
+ /* Handle partial digest encryption */
+ if (cipher_param->cipher_offset +
+ cipher_param->cipher_length <
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length)
+ req->comn_mid.dst_length =
+ req->comn_mid.src_length =
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length;
+ struct icp_qat_fw_comn_req_hdr *header =
+ &req->comn_hdr;
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+ header->serv_specif_flags,
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+ }
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ break;
+ default:
+ break;
+ }
+}
+
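For SNOW3G/KASUMI/ZUC chained operations, submit_one_chain_job above walks the scatter-gather list backwards from the last segment to locate the IOVA where the authenticated region ends, which is needed for the digest-encrypted check. A simplified standalone version of that walk (struct vec and iova_t are stand-ins for rte_crypto_vec and rte_iova_t; iova_from_end is an illustrative name):

```c
#include <stdint.h>

typedef uint64_t iova_t;

struct vec {
	iova_t iova;    /* bus address of the segment */
	uint32_t len;   /* segment length in bytes */
};

/* Return the IOVA of the byte located "back" bytes before the end of
 * the SGL data, scanning segments from last to first as the driver
 * does. Returns 0 if the SGL is shorter than "back" bytes. */
static iova_t
iova_from_end(const struct vec *v, int n, uint32_t back)
{
	int i;

	for (i = n - 1; i >= 0; i--) {
		if (v[i].len >= back)
			return v[i].iova + (v[i].len - back);
		back -= v[i].len;
	}
	return 0;
}
```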
+static __rte_always_inline int
+qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec,
+ struct rte_crypto_data *digest_vec,
+ __rte_unused struct rte_crypto_data *aad_vec,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = service_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_chain_job(ctx, req, data, n_data_vecs, iv_vec, digest_vec,
+ ofs, (uint32_t)data_len);
+
+ service_ctx->tail = tail;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = service_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(qp->enqueued - qp->dequeued + vec->num >=
+ qp->max_inflights)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = service_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_chain_job(ctx, req, vec->sgl[i].vec, vec->sgl[i].num,
+ vec->iv_vec + i, vec->digest_vec + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ service_ctx->tail = tail;
+ return i;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_dequeue(void *qp_data, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct icp_qat_fw_comn_resp *resp;
+ void *resp_opaque;
+ uint32_t i, n, inflight;
+ uint32_t head;
+ uint8_t status;
+
+ *n_success_jobs = 0;
+ head = service_ctx->head;
+
+ inflight = qp->enqueued - qp->dequeued;
+ if (unlikely(inflight == 0))
+ return 0;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ head);
+ /* no operation ready */
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return 0;
+
+ resp_opaque = (void *)(uintptr_t)resp->opaque_data;
+ /* get the dequeue count */
+ n = get_dequeue_count(resp_opaque);
+ if (unlikely(n == 0))
+ return 0;
+
+ out_opaque[0] = resp_opaque;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ post_dequeue(resp_opaque, 0, status);
+ *n_success_jobs += status;
+
+ head = (head + rx_queue->msg_size) & rx_queue->modulo_mask;
+
+ /* we already finished dequeue when n == 1 */
+ if (unlikely(n == 1)) {
+ i = 1;
+ goto end_deq;
+ }
+
+ if (is_opaque_array) {
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp ==
+ ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ out_opaque[i] = (void *)(uintptr_t)
+ resp->opaque_data;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ *n_success_jobs += status;
+ post_dequeue(out_opaque[i], i, status);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ }
+
+ goto end_deq;
+ }
+
+ /* opaque is not array */
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ post_dequeue(resp_opaque, i, status);
+ *n_success_jobs += status;
+ }
+
+end_deq:
+ service_ctx->head = head;
+ return i;
+}
+
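In qat_sym_dp_dequeue above, the first response's opaque data is handed to the user-supplied get_dequeue_count callback, which decides how many responses to drain; typically the burst size is stashed in the opaque structure at enqueue time. A sketch of such a callback (struct burst_opaque and its layout are purely illustrative, not defined by the API):

```c
#include <stdint.h>

/* Hypothetical per-burst opaque structure stored at enqueue time. The
 * application defines this layout itself; the service only passes the
 * pointer through. */
struct burst_opaque {
	uint32_t burst_size;    /* number of jobs enqueued together */
};

/* Callback matching the rte_cryptodev_get_dequeue_count_t contract:
 * parse the opaque of the first dequeued response and return how many
 * responses the driver should drain. */
static uint32_t
get_dequeue_count(void *opaque)
{
	return ((struct burst_opaque *)opaque)->burst_size;
}
```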
+static __rte_always_inline int
+qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *service_data,
+ void **out_opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+
+ register struct icp_qat_fw_comn_resp *resp;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ service_ctx->head);
+
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return -1;
+
+ *out_opaque = (void *)(uintptr_t)resp->opaque_data;
+
+ service_ctx->head = (service_ctx->head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+
+ return QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+}
+
+static __rte_always_inline void
+qat_sym_dp_kick_tail(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+
+ qp->enqueued += n;
+ qp->stats.enqueued_count += n;
+
+ assert(service_ctx->tail == ((tx_queue->tail + tx_queue->msg_size * n) &
+ tx_queue->modulo_mask));
+
+ tx_queue->tail = service_ctx->tail;
+
+ WRITE_CSR_RING_TAIL(qp->mmap_bar_addr,
+ tx_queue->hw_bundle_number,
+ tx_queue->hw_queue_number, tx_queue->tail);
+ tx_queue->csr_tail = tx_queue->tail;
+}
+
+static __rte_always_inline void
+qat_sym_dp_update_head(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data;
+
+ assert(service_ctx->head == ((rx_queue->head + rx_queue->msg_size * n) &
+ rx_queue->modulo_mask));
+
+ rx_queue->head = service_ctx->head;
+ rx_queue->nb_processed_responses += n;
+ qp->dequeued += n;
+ qp->stats.dequeued_count += n;
+ if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) {
+ uint32_t old_head, new_head;
+ uint32_t max_head;
+
+ old_head = rx_queue->csr_head;
+ new_head = rx_queue->head;
+ max_head = qp->nb_descriptors * rx_queue->msg_size;
+
+ /* write out free descriptors */
+ void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head;
+
+ if (new_head < old_head) {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE,
+ max_head - old_head);
+ memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE,
+ new_head);
+ } else {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head -
+ old_head);
+ }
+ rx_queue->nb_processed_responses = 0;
+ rx_queue->csr_head = new_head;
+
+ /* write current head to CSR */
+ WRITE_CSR_RING_HEAD(qp->mmap_bar_addr,
+ rx_queue->hw_bundle_number, rx_queue->hw_queue_number,
+ new_head);
+ }
+}
+
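The head-update path above lazily resets processed RX descriptors to the ring-empty signature, splitting the memset in two when the new head has wrapped past the end of the ring. A standalone sketch of that wrap handling (ring_clear and EMPTY_SIG are illustrative names, not driver symbols):

```c
#include <string.h>
#include <stdint.h>

#define EMPTY_SIG 0x7F  /* stand-in for ADF_RING_EMPTY_SIG_BYTE */

/* Mark descriptors in [old_head, new_head) as empty, where both are
 * byte offsets into a ring of "size" bytes. When new_head < old_head
 * the range wraps, so two memsets are needed. */
static void
ring_clear(uint8_t *base, uint32_t size, uint32_t old_head,
	uint32_t new_head)
{
	if (new_head < old_head) {
		memset(base + old_head, EMPTY_SIG, size - old_head);
		memset(base, EMPTY_SIG, new_head);
	} else {
		memset(base + old_head, EMPTY_SIG, new_head - old_head);
	}
}
```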
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ uint8_t is_update)
+{
+ struct qat_qp *qp;
+ struct qat_sym_session *ctx;
+ struct qat_sym_dp_service_ctx *dp_ctx;
+
+ if (service_ctx == NULL || session_ctx.crypto_sess == NULL ||
+ sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -EINVAL;
+
+ qp = dev->data->queue_pairs[qp_id];
+ ctx = (struct qat_sym_session *)get_sym_session_private_data(
+ session_ctx.crypto_sess, qat_sym_driver_id);
+ dp_ctx = (struct qat_sym_dp_service_ctx *)
+ service_ctx->drv_service_data;
+
+ if (!is_update) {
+ memset(service_ctx, 0, sizeof(*service_ctx) +
+ sizeof(struct qat_sym_dp_service_ctx));
+ service_ctx->qp_data = dev->data->queue_pairs[qp_id];
+ dp_ctx->tail = qp->tx_q.tail;
+ dp_ctx->head = qp->rx_q.head;
+ }
+
+ dp_ctx->session = ctx;
+
+ service_ctx->submit_done = qat_sym_dp_kick_tail;
+ service_ctx->dequeue_opaque = qat_sym_dp_dequeue;
+ service_ctx->dequeue_single = qat_sym_dp_dequeue_single_job;
+ service_ctx->dequeue_done = qat_sym_dp_update_head;
+
+ if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER ||
+ ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+ /* AES-GCM or AES-CCM */
+ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 ||
+ (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128
+ && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE
+ && ctx->qat_hash_alg ==
+ ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AEAD)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_aead_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_aead;
+ } else {
+ if (service_type != RTE_CRYPTO_DP_SYM_CHAIN)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_chain_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_chain;
+ }
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AUTH_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_auth_jobs;
+ service_ctx->submit_single_job = qat_sym_dp_submit_single_auth;
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
+ if (service_type != RTE_CRYPTO_DP_SYM_CIPHER_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_cipher_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_cipher;
+ }
+
+ return 0;
+}
+
+int
+qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+ return sizeof(struct qat_sym_dp_service_ctx);
+}
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 314742f53..bef08c3bc 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = {
/* Crypto related operations */
.sym_session_get_size = qat_sym_session_get_private_size,
.sym_session_configure = qat_sym_session_configure,
- .sym_session_clear = qat_sym_session_clear
+ .sym_session_clear = qat_sym_session_clear,
+
+ /* Data plane service related operations */
+ .get_drv_ctx_size = qat_sym_get_service_ctx_size,
+ .configure_service = qat_sym_dp_configure_service_ctx,
};
#ifdef RTE_LIBRTE_SECURITY
@@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+ RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
--
2.20.1
* [dpdk-dev] [dpdk-dev v7 3/4] test/crypto: add unit-test for cryptodev direct APIs
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto " Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
@ 2020-08-28 12:58 ` Fan Zhang
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 4/4] doc: add cryptodev service APIs guide Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-28 12:58 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
roy.fan.zhang
This patch adds QAT unit tests that exercise the cryptodev symmetric
crypto direct APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev.c | 354 ++++++++++++++++++++++++--
app/test/test_cryptodev.h | 6 +
app/test/test_cryptodev_blockcipher.c | 50 ++--
3 files changed, 373 insertions(+), 37 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..d6909984d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -49,6 +49,8 @@
#define VDEV_ARGS_SIZE 100
#define MAX_NB_SESSIONS 4
+#define MAX_DRV_SERVICE_CTX_SIZE 256
+
#define IN_PLACE 0
#define OUT_OF_PLACE 1
@@ -57,6 +59,8 @@ static int gbl_driver_id;
static enum rte_security_session_action_type gbl_action_type =
RTE_SECURITY_ACTION_TYPE_NONE;
+int hw_dp_test;
+
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
struct rte_mempool *large_mbuf_pool;
@@ -147,6 +151,153 @@ ceil_byte_length(uint32_t num_bits)
return (num_bits >> 3);
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits)
+{
+ int32_t n;
+ struct rte_crypto_sym_op *sop;
+ struct rte_crypto_op *ret_op = NULL;
+ struct rte_crypto_vec data_vec[UINT8_MAX];
+ struct rte_crypto_data iv_vec, aad_vec, digest_vec;
+ union rte_crypto_sym_ofs ofs;
+ int32_t status;
+ uint32_t min_ofs, max_len;
+ union rte_cryptodev_session_ctx sess;
+ enum rte_crypto_dp_service service_type;
+ uint32_t count = 0;
+ uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0};
+ struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
+ int ctx_service_size;
+
+ sop = op->sym;
+
+ sess.crypto_sess = sop->session;
+
+ if (is_cipher && is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_CHAIN;
+ min_ofs = RTE_MIN(sop->cipher.data.offset,
+ sop->auth.data.offset);
+ max_len = RTE_MAX(sop->cipher.data.length,
+ sop->auth.data.length);
+ } else if (is_cipher) {
+ service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY;
+ min_ofs = sop->cipher.data.offset;
+ max_len = sop->cipher.data.length;
+ } else if (is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY;
+ min_ofs = sop->auth.data.offset;
+ max_len = sop->auth.data.length;
+ } else { /* aead */
+ service_type = RTE_CRYPTO_DP_SYM_AEAD;
+ min_ofs = sop->aead.data.offset;
+ max_len = sop->aead.data.length;
+ }
+
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ min_ofs = min_ofs >> 3;
+ }
+
+ ctx_service_size = rte_cryptodev_get_dp_service_ctx_data_size(dev_id);
+ assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE &&
+ ctx_service_size > 0);
+
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ /* test update service */
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ n = rte_crypto_mbuf_to_vec(sop->m_src, 0, min_ofs + max_len,
+ data_vec, RTE_DIM(data_vec));
+ if (n < 0 || n != sop->m_src->nb_segs) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ ofs.raw = 0;
+
+ iv_vec.base = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ iv_vec.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+
+ switch (service_type) {
+ case RTE_CRYPTO_DP_SYM_AEAD:
+ ofs.ofs.cipher.head = sop->aead.data.offset;
+ aad_vec.base = (void *)sop->aead.aad.data;
+ aad_vec.iova = sop->aead.aad.phys_addr;
+ digest_vec.base = (void *)sop->aead.digest.data;
+ digest_vec.iova = sop->aead.digest.phys_addr;
+ if (len_in_bits) {
+ ofs.ofs.cipher.head >>= 3;
+ ofs.ofs.cipher.tail >>= 3;
+ }
+ break;
+ case RTE_CRYPTO_DP_SYM_CIPHER_ONLY:
+ ofs.ofs.cipher.head = sop->cipher.data.offset;
+ if (len_in_bits) {
+ ofs.ofs.cipher.head >>= 3;
+ ofs.ofs.cipher.tail >>= 3;
+ }
+ break;
+ case RTE_CRYPTO_DP_SYM_AUTH_ONLY:
+ ofs.ofs.auth.head = sop->auth.data.offset;
+ digest_vec.base = (void *)sop->auth.digest.data;
+ digest_vec.iova = sop->auth.digest.phys_addr;
+ break;
+ case RTE_CRYPTO_DP_SYM_CHAIN:
+ ofs.ofs.cipher.head =
+ sop->cipher.data.offset - sop->auth.data.offset;
+ ofs.ofs.cipher.tail =
+ (sop->auth.data.offset + sop->auth.data.length) -
+ (sop->cipher.data.offset + sop->cipher.data.length);
+ if (len_in_bits) {
+ ofs.ofs.cipher.head >>= 3;
+ ofs.ofs.cipher.tail >>= 3;
+ }
+ digest_vec.base = (void *)sop->auth.digest.data;
+ digest_vec.iova = sop->auth.digest.phys_addr;
+ break;
+ default:
+ break;
+ }
+
+ status = rte_cryptodev_dp_submit_single_job(ctx, data_vec, n, ofs,
+ &iv_vec, &digest_vec, &aad_vec, (void *)op);
+ if (status < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ rte_cryptodev_dp_submit_done(ctx, 1);
+
+ status = -1;
+ while (count++ < 1024 && status == -1) {
+ status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
+ (void **)&ret_op);
+ if (status == -1)
+ rte_pause();
+ }
+
+ if (status != -1)
+ rte_cryptodev_dp_dequeue_done(ctx, 1);
+
+ if (count == 1025 || status != 1 || ret_op != op) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ op->status = status == 1 ? RTE_CRYPTO_OP_STATUS_SUCCESS :
+ RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static void
process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
{
@@ -2470,7 +2621,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -2549,7 +2704,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2619,6 +2778,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -2690,7 +2852,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2897,8 +3063,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
- ut_params->op);
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -2983,7 +3153,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3306,7 +3480,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3381,7 +3559,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3756,7 +3938,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -3924,7 +4110,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4019,7 +4209,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4155,7 +4349,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4344,7 +4542,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4526,7 +4728,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4716,7 +4922,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4857,7 +5067,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4944,7 +5158,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5031,7 +5249,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5119,7 +5341,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5251,7 +5477,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5437,7 +5667,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -7043,6 +7277,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (hw_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8540,6 +8777,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (hw_dp_test == 1)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -11480,6 +11720,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
if (oop == IN_PLACE &&
gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (oop == IN_PLACE && hw_dp_test == 1)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -13041,6 +13284,75 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static struct unit_test_suite cryptodev_sym_direct_api_testsuite = {
+ .suite_name = "Crypto Sym direct API Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_auth_cipher_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_auth_cipher_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_generate_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_hash_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_kasumi_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_cipheronly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_authonly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_CCM_authenticated_encryption_test_case_128_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_CCM_authenticated_decryption_test_case_128_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encrypt_SGL_in_place_1500B),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+ int ret;
+
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both "
+ "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
+ "are enabled in config file to run this testsuite.\n");
+ return TEST_SKIPPED;
+ }
+
+ hw_dp_test = 1;
+ ret = unit_test_suite_runner(&cryptodev_sym_direct_api_testsuite);
+ hw_dp_test = 0;
+
+ return ret;
+}
+
+REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api);
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..c382c12c4 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -71,6 +71,8 @@
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+extern int hw_dp_test;
+
/**
* Write (spread) data from buffer to mbuf data
*
@@ -209,4 +211,8 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len,
return NULL;
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits);
+
#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 221262341..fc540e362 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -462,25 +462,43 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
/* Process crypto operation */
- if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Error sending packet for encryption");
- status = TEST_FAILED;
- goto error_exit;
- }
+ if (hw_dp_test) {
+ uint8_t is_cipher = 0, is_auth = 0;
+
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
+ RTE_LOG(DEBUG, USER1,
+ "QAT direct API does not support OOP, Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED");
+ status = TEST_SUCCESS;
+ goto error_exit;
+ }
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
+ is_cipher = 1;
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
+ is_auth = 1;
+
+ process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0);
+ } else {
+ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Error sending packet for encryption");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
- op = NULL;
+ op = NULL;
- while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
- rte_pause();
+ while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
+ rte_pause();
- if (!op) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Failed to process sym crypto op");
- status = TEST_FAILED;
- goto error_exit;
+ if (!op) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Failed to process sym crypto op");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
}
debug_hexdump(stdout, "m_src(after):",
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
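[Editorial note] The blockcipher hunk in the patch above derives the `is_cipher`/`is_auth` arguments for `process_sym_hw_api_op()` from the test case's `op_mask`. A minimal, self-contained sketch of that flag derivation follows; the `TEST_OP_*` values here are illustrative stand-ins for the real `BLOCKCIPHER_TEST_OP_*` flags defined in the test headers:

```c
#include <stdint.h>

/* Illustrative stand-ins for the BLOCKCIPHER_TEST_OP_CIPHER/_AUTH flags
 * used in the hunk above; the real values live in the test headers. */
#define TEST_OP_CIPHER (1u << 0)
#define TEST_OP_AUTH   (1u << 1)

/* Derive the is_cipher/is_auth arguments passed to process_sym_hw_api_op()
 * from a test case's operation mask, mirroring the blockcipher hunk. */
void
derive_op_flags(uint32_t op_mask, uint8_t *is_cipher, uint8_t *is_auth)
{
	*is_cipher = (op_mask & TEST_OP_CIPHER) ? 1 : 0;
	*is_auth = (op_mask & TEST_OP_AUTH) ? 1 : 0;
}
```

A cipher-auth chain test case sets both flags; an auth-only case sets only `is_auth`, which selects the corresponding direct-API path in `process_sym_hw_api_op()`.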
* [dpdk-dev] [dpdk-dev v7 4/4] doc: add cryptodev service APIs guide
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs Fan Zhang
` (2 preceding siblings ...)
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
@ 2020-08-28 12:58 ` Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-08-28 12:58 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
roy.fan.zhang
This patch updates programmer's guide to demonstrate the usage
and limitations of cryptodev symmetric crypto data-path service
APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++
1 file changed, 90 insertions(+)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..77521c959 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -631,6 +631,96 @@ a call argument. Status different than zero must be treated as error.
For more details, e.g. how to convert an mbuf to an SGL, please refer to an
example usage in the IPsec library implementation.
+Cryptodev Direct Data-plane Service API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The direct crypto data-path service is a set of APIs provided especially for
+external libraries/applications that want to take advantage of the rich
+features provided by cryptodev, but do not necessarily depend on cryptodev
+operations, mempools, or mbufs in their data-path implementations.
+
+The direct crypto data-path service has the following advantages:
+
+- Supports raw data pointers and physical addresses as input.
+- Does not require specific data structures allocated from the heap, such as
+  the cryptodev operation.
+- Enqueue in a burst or single operation. The service allows enqueuing in
+  a burst similar to the ``rte_cryptodev_enqueue_burst`` operation, or
+  enqueuing one job at a time while maintaining the necessary context data
+  locally for the next single-job enqueue operation. The latter method is
+  especially helpful when the user application's crypto operations would
+  otherwise have to be clustered into a burst. Enqueuing one operation at a
+  time avoids that additional gathering loop and reduces the cache misses
+  caused by the double "looping" situation.
+- Customizable dequeue count. Instead of dequeuing the maximum possible
+  number of operations, as the ``rte_cryptodev_dequeue_burst`` operation
+  does, the service allows the user to provide a callback function that
+  decides how many operations are to be dequeued. This is especially helpful
+  when the expected dequeue count is hidden inside the opaque data stored
+  during enqueue, in which case the callback function can parse the opaque
+  data structure.
+- Abandon enqueue and dequeue anytime. One of the drawbacks of the
+  ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
+  operations is that once an operation is enqueued/dequeued there is no way
+  to undo it. The service makes abandoning an operation possible by keeping
+  a local copy of the queue operation data in the service context data. The
+  data is written back to the driver-maintained operation data only when the
+  enqueue or dequeue done function is called.
+
+Cryptodev PMDs that support this feature advertise the
+``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag. To use this feature, the
+function ``rte_cryptodev_get_dp_service_ctx_data_size`` should be called to
+get the data-path service context data size. The user should then create a
+local buffer at least this size long and initialize it using the
+``rte_cryptodev_dp_configure_service`` function call.
+
+The ``rte_cryptodev_dp_configure_service`` function call initializes or
+updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
+driver-specific queue pair data pointer, the service context buffer, and a
+set of function pointers to enqueue and dequeue different algorithms'
+operations. ``rte_cryptodev_dp_configure_service`` should be called:
+
+- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
+- When a different cryptodev session, security session, or session-less xform
+  is used (set the ``is_update`` parameter to 1).
+
+Two different enqueue functions are provided.
+
+- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
+ the ``rte_crypto_sym_vec`` structure.
+- ``rte_cryptodev_dp_submit_single_job``: submit single operation.
+
+Neither enqueue function commands the crypto device to start processing until
+the ``rte_cryptodev_dp_submit_done`` function is called. Until then the
+driver only stores the necessary context data in the
+``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
+user wants to abandon the submitted operations, simply call the
+``rte_cryptodev_dp_configure_service`` function again with the parameter
+``is_update`` set to 0. The driver will then restore the service context data
+to its previous state.
+
+To dequeue the operations the user also has two choices:
+
+- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
+  user needs to provide a callback function for the driver to obtain the
+  dequeue count and to perform post-processing, such as writing the status
+  field.
+- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
+
+As with enqueue, the function ``rte_cryptodev_dp_dequeue_done`` is used to
+merge the user's local service context data with the driver's queue operation
+data. Likewise, to abandon the dequeue operation (keeping the operations in
+the queue), the user shall skip the ``rte_cryptodev_dp_dequeue_done`` call
+and instead call the ``rte_cryptodev_dp_configure_service`` function with the
+parameter ``is_update`` set to 0.
+
+There are a few limitations to the data-path service:
+
+* Only in-place operations are supported.
+* The APIs are NOT thread-safe.
+* The direct API's enqueue CANNOT be mixed with
+  ``rte_cryptodev_enqueue_burst``, or vice versa.
+
+See *DPDK API Reference* for details on each API definition.
+
Sample code
-----------
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
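[Editorial note] The enqueue/abandon semantics the guide above describes (submitted jobs only accumulate in the local service context until the "done" call kicks the device, and re-running configure abandons them) can be modelled with a toy shadow-tail queue. All names below are illustrative, not the real ``rte_cryptodev_dp_*`` API:

```c
#include <stdint.h>

/* Toy model of the deferred-kick pattern described in the guide. Submitted
 * jobs only advance a local shadow tail held in the service context; the
 * device-visible tail moves when the "done" call is made, and re-running
 * configure restores the context from the driver state, abandoning any
 * pending jobs. All names are illustrative, not the real DPDK API. */
struct toy_queue {
	uint32_t hw_tail;	/* driver/device-owned queue tail */
};

struct toy_dp_ctx {
	struct toy_queue *q;
	uint32_t shadow_tail;	/* local copy, updated per submit */
};

/* Models rte_cryptodev_dp_configure_service() with is_update set to 0. */
void
toy_configure(struct toy_dp_ctx *ctx, struct toy_queue *q)
{
	ctx->q = q;
	ctx->shadow_tail = q->hw_tail;	/* recover driver-maintained state */
}

/* Models a single-job submit: the device is not told anything yet. */
void
toy_submit_single(struct toy_dp_ctx *ctx)
{
	ctx->shadow_tail++;
}

/* Models rte_cryptodev_dp_submit_done(): write back and kick the device. */
void
toy_submit_done(struct toy_dp_ctx *ctx, uint32_t n)
{
	(void)n;	/* a real driver would ring the doorbell for n jobs */
	ctx->q->hw_tail = ctx->shadow_tail;
}
```

Submitting two jobs and then calling `toy_configure()` again leaves `hw_tail` untouched (the jobs are abandoned); calling `toy_submit_done()` instead advances it, which is the write-back behaviour the guide attributes to the done functions.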
* Re: [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-08-31 6:23 ` Kusztal, ArkadiuszX
2020-08-31 12:21 ` Zhang, Roy Fan
2020-08-31 15:15 ` Zhang, Roy Fan
0 siblings, 2 replies; 84+ messages in thread
From: Kusztal, ArkadiuszX @ 2020-08-31 6:23 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Dybkowski, AdamX, Bronowski, PiotrX
Hi Fan, Piotrek,
-----Original Message-----
From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
Sent: piątek, 28 sierpnia 2020 14:58
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX <adamx.dybkowski@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Bronowski, PiotrX <piotrx.bronowski@intel.com>
Subject: [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
This patch adds data-path service APIs for enqueue and dequeue operations to cryptodev. The APIs support flexible user-define enqueue and dequeue behaviors and operation mode.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
---
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 44 ++-
lib/librte_cryptodev/rte_cryptodev.c | 45 +++
lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++-
.../rte_cryptodev_version.map | 10 +
6 files changed, 481 insertions(+), 9 deletions(-)
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index fd5ef3a87..f009be9af 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op,
return 0;
}
+/** Crypto data-path service types */
+enum rte_crypto_dp_service {
+ RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
+ RTE_CRYPTO_DP_SYM_AUTH_ONLY,
+ RTE_CRYPTO_DP_SYM_CHAIN,
[Arek] - if it is auth-cipher/cipher-auth will be decided thanks to sym_session/xform?
+ RTE_CRYPTO_DP_SYM_AEAD,
+ RTE_CRYPTO_DP_N_SERVICE
+};
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..518e4111b 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -50,6 +50,18 @@ struct rte_crypto_sgl {
uint32_t num;
};
+/**
+ * Crypto IO Data without length info.
+ * Supposed to be used to pass input/output data buffers with lengths
+ * defined when creating crypto session.
+ */
+struct rte_crypto_data {
+ /** virtual address of the data buffer */
+ void *base;
+ /** IOVA of the data buffer */
+ rte_iova_t iova;
+};
+
/**
* Synchronous operation descriptor.
* Supposed to be used with CPU crypto API call.
@@ -57,12 +69,32 @@ struct rte_crypto_sgl {
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
- /** array of pointers to IV */
- void **iv;
- /** array of pointers to AAD */
- void **aad;
- /** array of pointers to digest */
- void **digest;
+
+ union {
+
+ /* Supposed to be used with CPU crypto API call. */
+ struct {
+ /** array of pointers to IV */
+ void **iv;
+ /** array of pointers to AAD */
+ void **aad;
+ /** array of pointers to digest */
+ void **digest;
+ };
+
+ /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec()
+ * call.
+ */
+ struct {
+ /** vector to IV */
+ struct rte_crypto_data *iv_vec;
+ /** vector to AAD */
+ struct rte_crypto_data *aad_vec;
+ /** vector to Digest */
+ struct rte_crypto_data *digest_vec;
+ };
+ };
+
/**
* array of statuses for each operation:
* - 0 on success
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..8a28511f9 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1914,6 +1914,51 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
}
+int
+rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id)
+{
+ struct rte_cryptodev *dev;
+ int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
+ int32_t priv_size;
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (*dev->dev_ops->get_drv_ctx_size == NULL ||
+ !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)) {
+ return -1;
+ }
+
+ priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
+ if (priv_size < 0)
+ return -1;
+
+ return RTE_ALIGN_CEIL((size + priv_size), 8);
+}
+
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
+{
+ struct rte_cryptodev *dev;
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)
+ || dev->dev_ops->configure_service == NULL)
+ return -1;
+
[Arek] - Why to change order of arguments between rte_cryptodev_dp_configure_service and configure_service pointer? Except of dev and dev_id they all are the same.
+ return (*dev->dev_ops->configure_service)(dev, qp_id, ctx,
+ service_type, sess_type, session_ctx, is_update);
+}
+
/** Initialise rte_crypto_op mempool element */
static void
rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..9c97846f3 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
/**< Support symmetric session-less operations */
#define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
/**< Support operations on data which is not byte aligned */
-
+#define RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE (1ULL << 24)
+/**< Support accelerated specific raw data as input */
/**
* Get the name of a crypto device feature flag
@@ -1351,6 +1352,338 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
struct rte_crypto_sym_vec *vec);
+/**
+ * Get the size of the data-path service context for all registered drivers.
+ *
+ * @param dev_id The device identifier.
+ *
+ * @return
+ * - If the device supports data-path service, return the context size.
+ * - If the device does not support the data-plane service, return -1.
+ */
+__rte_experimental
+int
+rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id);
+
+/**
+ * Union of different crypto session types, including session-less
+xform
+ * pointer.
+ */
+union rte_cryptodev_session_ctx {
+ struct rte_cryptodev_sym_session *crypto_sess;
+ struct rte_crypto_sym_xform *xform;
+ struct rte_security_session *sec_sess;
+};
+
+/**
+ * Submit a data vector into device queue but the driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_vec() is called.
[Arek] " until ``rte_cryptodev_dp_submit_done`` function is called " ?
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void **opaque);
+
+/**
+ * Submit single job into device queue but the driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_vec() is called.
[Arek] until ``rte_cryptodev_dp_submit_done`` function is called " as above.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV data.
+ * @param digest Digest data.
+ * @param aad AAD data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
[Arek] How can we distinguish between malformed packet and full queue?
+ */
+typedef int (*cryptodev_dp_submit_single_job_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_data *iv, struct rte_crypto_data *digest,
+ struct rte_crypto_data *aad, void *opaque);
+
+/**
+ * Inform the queue pair to start processing or finish dequeuing all
+ * submitted/dequeued jobs.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param n The total number of submitted jobs.
+ */
+typedef void (*cryptodev_dp_sym_operation_done_t)(void *qp,
+ uint8_t *service_data, uint32_t n);
+
+/**
+ * Typedef that the user provided for the driver to get the dequeue count.
+ * The function may return a fixed number or the number parsed from the
+ * opaque data stored in the first processed job.
+ *
+ * @param opaque Dequeued opaque data.
+ **/
+typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque);
+
+/**
+ * Typedef that the user provided to deal with post dequeue operation, such
+ * as filling status.
+ *
+ * @param opaque Dequeued opaque data. In case
+ * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is
+ * set, this value will be the opaque data stored
+ * in the specific processed jobs referenced by
+ * index, otherwise it will be the opaque data
+ * stored in the first processed job in the burst.
+ * @param index Index number of the processed job.
+ * @param is_op_success Driver filled operation status.
+ **/
+typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index,
+ uint8_t is_op_success);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieve from
+ * device queue. In case of
+ * *is_opaque_array* is set there should
+ * be enough room to store all opaque data.
+ * @param is_opaque_array Set 1 if every dequeued job will be
+ * written the opaque data into
+ * *out_opaque* array.
+ * @param n_success_jobs Driver written value to specific the
+ * total successful operations count.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param out_opaque Opaque pointer to be retrieve from
+ * device queue.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
+ void *qp, uint8_t *service_data, void **out_opaque);
+
+/**
+ * Context data for asynchronous crypto process.
+ */
+struct rte_crypto_dp_service_ctx {
+ void *qp_data;
+
+ union {
[Arek] - Why union? It will be extended by other structs in future?
[Arek] - unnamed union and struct, C11 extension.
+ /* Supposed to be used for symmetric crypto service */
+ struct {
+ cryptodev_dp_submit_single_job_t submit_single_job;
+ cryptodev_dp_sym_submit_vec_t submit_vec;
+ cryptodev_dp_sym_operation_done_t submit_done;
+ cryptodev_dp_sym_dequeue_t dequeue_opaque;
+ cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
+ cryptodev_dp_sym_operation_done_t dequeue_done;
+ };
+ };
+
+ /* Driver specific service data */
+ uint8_t drv_service_data[];
[Arek] - flexible array, C99 so again extension
+};
+
+/**
+ * Configure one DP service context data. Calling this function for the
+ * first time the user should unset the *is_update* parameter and the driver
+ * will fill necessary operation data into the ctx buffer. Only when
+ * rte_cryptodev_dp_submit_done() is called the data stored in the ctx
+ * buffer will not be effective.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param service_type Type of the service requested.
+ * @param sess_type session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update Set 1 if ctx is pre-initialized but need
+ * update to different service type or session,
+ * but the rest driver data remains the same.
[Arek] - if user will call it only once with is_update == 1 will there be any error shown about ctx not set when started processing or is it undefined behavior?
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
+
+/**
+ * Submit single job into device queue but the driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_vec() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV data.
+ * @param digest Digest data.
+ * @param aad AAD data.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_data *iv, struct rte_crypto_data *digest,
+ struct rte_crypto_data *aad, void *opaque)
+{
+ return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
+ data, n_data_vecs, ofs, iv, digest, aad, opaque);
+}
+
+/**
+ * Submit a data vector into device queue but the driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_vec() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
+ ofs, opaque);
+}
+
+/**
+ * Command the queue pair to start processing all submitted jobs from the
+ * last rte_cryptodev_init_dp_service() call.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of submitted jobs.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_cryptodev_dp_submit_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n)
+{
+ (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain the dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from
+ * the device queue. If *is_opaque_array* is
+ * set there should be enough room to store
+ * all opaque data.
+ * @param is_opaque_array Set 1 if the opaque data of every
+ * dequeued job shall be written into the
+ * *out_opaque* array.
+ * @param n_success_jobs Driver written value to specify the
+ * count of total successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data,
+ get_dequeue_count, post_dequeue, out_opaque, is_opaque_array,
+ n_success_jobs);
+}
+
+/**
+ * Dequeue a single symmetric crypto operation on user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param out_opaque Opaque pointer to be retrieved from
+ * the device queue. The driver shall support
+ * NULL input of this parameter.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation failed.
+ * - -1 if no job is dequeued.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
+ out_opaque);
+}
+
+/**
+ * Inform the queue pair that dequeuing of the jobs is finished.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of jobs already dequeued.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_cryptodev_dp_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n)
+{
+ (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 81975d72b..bf0260c87 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -316,6 +316,41 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+/**
+ * Typedef that the driver provides to get the service context private data size.
+ *
+ * @param dev Crypto device pointer.
+ *
+ * @return
+ * - On success return the size of the device's service context private data.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_get_service_ctx_size_t)(
+ struct rte_cryptodev *dev);
+
+/**
+ * Typedef that the driver provides to configure the data-path service.
+ *
+ * @param dev Crypto device pointer.
+ * @param qp_id Crypto device queue pair index.
+ * @param ctx The data-path service context data.
+ * @param service_type Type of the service requested.
+ * @param sess_type session type.
+ * @param session_ctx Session context data.
+ * @param is_update Set 1 if ctx is pre-initialized but needs
+ * updating to a different service type or
+ * session, while the rest of the driver data
+ * remains the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_configure_service_t)(
+ struct rte_cryptodev *dev, uint16_t qp_id,
+ struct rte_crypto_dp_service_ctx *ctx,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx, uint8_t is_update);
/** Crypto device operations function pointer table */
struct rte_cryptodev_ops {
@@ -348,8 +383,16 @@ struct rte_cryptodev_ops {
/**< Clear a Crypto sessions private data. */
cryptodev_asym_free_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
- /**< process input data synchronously (cpu-crypto). */
+ union {
+ cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+ /**< process input data synchronously (cpu-crypto). */
+ struct {
+ cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
+ /**< Get data path service context data size. */
+ cryptodev_dp_configure_service_t configure_service;
+ /**< Initialize crypto service ctx data. */
+ };
+ };
};
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..d384382d3 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -105,4 +105,14 @@ EXPERIMENTAL {
# added in 20.08
rte_cryptodev_get_qp_status;
+
+ # added in 20.11
+ rte_cryptodev_dp_configure_service;
+ rte_cryptodev_get_dp_service_ctx_data_size;
[Arek] - I know its experimental but following 6 functions are not only static but __always_inline__ too, I doubt there will be any symbol generated, should they be placed inside map file then?
[Arek] - Another thing since symbol is not created these funcs are outside of abidifftool scope of interest either I think.
+ rte_cryptodev_dp_submit_single_job;
+ rte_cryptodev_dp_sym_submit_vec;
+ rte_cryptodev_dp_submit_done;
+ rte_cryptodev_dp_sym_dequeue;
+ rte_cryptodev_dp_sym_dequeue_single_job;
+ rte_cryptodev_dp_dequeue_done;
};
[Arek] - Two small things:
- rte_crypto_data - there is many thing in asymmetric/symmetric crypto that can be called like that, and this is basically symmetric encrypted/authenticated data pointer, maybe some more specific name could be better.
- can iv, digest, aad be grouped into one? All share the same features, and there would less arguments to functions.
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
2020-08-31 6:23 ` Kusztal, ArkadiuszX
@ 2020-08-31 12:21 ` Zhang, Roy Fan
2020-08-31 15:15 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-08-31 12:21 UTC (permalink / raw)
To: Kusztal, ArkadiuszX, dev
Cc: akhil.goyal, Trahe, Fiona, Dybkowski, AdamX, Bronowski, PiotrX
Hi Arek,
Thank you very much to review.
> -----Original Message-----
> From: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>
> Sent: Monday, August 31, 2020 7:24 AM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Dybkowski,
> AdamX <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>
> Subject: RE: [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
>
>
> [Arek] - Two small things:
> - rte_crypto_data - there is many thing in asymmetric/symmetric
> crypto that can be called like that, and this is basically symmetric
> encrypted/authenticated data pointer, maybe some more specific name
> could be better.
[Fan] Any good name to propose?
> - can iv, digest, aad be grouped into one? All share the same features,
> and there would less arguments to functions.
> --
> 2.20.1
Something like
struct rte_crypto_sym_<GOOD_NAME_HERE>
{
void * iv, *digest, *aad;
rte_iova_t iv_iova, digest_iova, aad_iova;
};
?
Definitely can do.
Regards,
Fan
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
2020-08-31 6:23 ` Kusztal, ArkadiuszX
2020-08-31 12:21 ` Zhang, Roy Fan
@ 2020-08-31 15:15 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-08-31 15:15 UTC (permalink / raw)
To: Kusztal, ArkadiuszX, dev
Cc: akhil.goyal, Trahe, Fiona, Dybkowski, AdamX, Bronowski, PiotrX
Hi Arek,
> -----Original Message-----
> From: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>
> Sent: Monday, August 31, 2020 7:24 AM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Dybkowski,
> AdamX <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>
> Subject: RE: [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
>
> Hi Fan, Piotrek,
>
> -----Original Message-----
> From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Sent: piątek, 28 sierpnia 2020 14:58
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal,
> ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> Bronowski, PiotrX <piotrx.bronowski@intel.com>
> Subject: [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs
>
> This patch adds data-path service APIs for enqueue and dequeue operations
> to cryptodev. The APIs support flexible user-define enqueue and dequeue
> behaviors and operation mode.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
> ---
> lib/librte_cryptodev/rte_crypto.h | 9 +
> lib/librte_cryptodev/rte_crypto_sym.h | 44 ++-
> lib/librte_cryptodev/rte_cryptodev.c | 45 +++
> lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
> lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++-
> .../rte_cryptodev_version.map | 10 +
> 6 files changed, 481 insertions(+), 9 deletions(-)
>
> +/** Crypto data-path service types */
> +enum rte_crypto_dp_service {
> + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
> + RTE_CRYPTO_DP_SYM_AUTH_ONLY,
> + RTE_CRYPTO_DP_SYM_CHAIN,
> [Arek] - if it is auth-cipher/cipher-auth will be decided thanks to
> sym_session/xform?
[Fan] - yes.
> + RTE_CRYPTO_DP_SYM_AEAD,
> + RTE_CRYPTO_DP_N_SERVICE
...
> + || dev->dev_ops->configure_service == NULL)
> + return -1;
> +
> [Arek] - Why to change order of arguments between
> rte_cryptodev_dp_configure_service and configure_service pointer? Except
> of dev and dev_id they all are the same.
[Fan] - I will update.
> + * Submit a data vector into device queue but the driver will not start
> + * processing until rte_cryptodev_dp_sym_submit_vec() is called.
> [Arek] " until ``rte_cryptodev_dp_submit_done`` function is called " ?
[Fan] - Yes, will update. Sorry for it.
>
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param vec The array of job vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param opaque The array of opaque data for
> dequeue.
> + * @return
> + * - The number of jobs successfully submitted.
> + */
> +typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
> + void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
> + union rte_crypto_sym_ofs ofs, void **opaque);
> +
> +/**
> + * Submit single job into device queue but the driver will not start
> + * processing until rte_cryptodev_dp_sym_submit_vec() is called.
> [Arek] until ``rte_cryptodev_dp_submit_done`` function is called " as above.
[Fan] - Yes, will update. Sorry for it.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param data The buffer vector.
> + * @param n_data_vecs Number of buffer vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param iv IV data.
> + * @param digest Digest data.
> + * @param aad AAD data.
> + * @param opaque The opaque data for dequeue.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> [Arek] How can we distinguish between malformed packet and full queue?
[Fan] We may have to rely on the application to track the inflight ops.
We want to avoid use enqueue_burst same as rte_cryptodev_enqueue_burst,
as the caller application and the PMD has to loop the same job burst 2 times to write
and read the job content (cause >5% perf loss). The best option is providing an inline
function to enqueue one job at a time and provides the capability of abandoning the
enqueue if not all expected jobs are enqueued. So the idea is to create a shadow copy
of the device's queue stored in the user application and then have the capability
to abandon the enqueue when not all expected jobs are enqueued. The necessary
check has to be made by the driver when merging the user's local queue data into PMD's
queue data.
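[Fan's shadow-copy argument above can be sketched as a minimal, self-contained C model. All names here are hypothetical illustrations, not DPDK code: the application holds a local copy of the queue tail, single-job submits only advance that shadow copy, the "done" call publishes the tail to the driver in one step, and re-running the configure step abandons any uncommitted jobs by re-reading driver state.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the shadow-copy enqueue idea. */
#define RING_SZ 8

struct drv_queue {
	uint32_t tail;            /* tail as seen by the driver/HW */
	int jobs[RING_SZ];
};

struct dp_ctx {
	struct drv_queue *q;
	uint32_t shadow_tail;     /* application-local shadow copy */
};

/* Configure (or re-configure) the context: re-syncing the shadow tail
 * from the driver state also abandons any uncommitted submissions. */
static void dp_configure(struct dp_ctx *ctx, struct drv_queue *q)
{
	ctx->q = q;
	ctx->shadow_tail = q->tail;
}

/* Submit one job: only the shadow tail moves, the driver sees nothing. */
static int dp_submit_single(struct dp_ctx *ctx, int job)
{
	uint32_t next = (ctx->shadow_tail + 1) % RING_SZ;

	if (next == 0)            /* simplistic full check for the model */
		return -1;
	ctx->q->jobs[ctx->shadow_tail] = job;
	ctx->shadow_tail = next;
	return 0;
}

/* Publish all pending jobs to the driver in a single step. */
static void dp_submit_done(struct dp_ctx *ctx, uint32_t n)
{
	(void)n;
	ctx->q->tail = ctx->shadow_tail;
}
```

This illustrates why no per-job error path for a full queue is needed: the application simply re-configures the context to discard the uncommitted shadow state.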
> + */
> +typedef int (*cryptodev_dp_submit_single_job_t)(
> + void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
> + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
> + struct rte_crypto_data *iv, struct rte_crypto_data *digest,
> + struct rte_crypto_data *aad, void *opaque);
> +
> +/**
> +struct rte_crypto_dp_service_ctx {
> + void *qp_data;
> +
> + union {
> [Arek] - Why union? It will be extended by other structs in future?
[Fan] - yes, maybe used by asymmetric crypto in the future.
> [Arek] - unnamed union and struct, C11 extension.
[Fan] Will correct it.
> + /* Supposed to be used for symmetric crypto service */
> + struct {
> + cryptodev_dp_submit_single_job_t
> submit_single_job;
> + cryptodev_dp_sym_submit_vec_t submit_vec;
> + cryptodev_dp_sym_operation_done_t submit_done;
> + cryptodev_dp_sym_dequeue_t dequeue_opaque;
> + cryptodev_dp_sym_dequeue_single_job_t
> dequeue_single;
> + cryptodev_dp_sym_operation_done_t
> dequeue_done;
> + };
> + };
> +
> + /* Driver specific service data */
> + uint8_t drv_service_data[];
> [Arek] - flexible array, C99 so again extension
[Fan] Will correct it.
> +};
> +
> +/**
> + * Configure one DP service context data. Calling this function for the
> +first
> + * time the user should unset the *is_update* parameter and the driver
> +will
> + * fill necessary operation data into ctx buffer. Only when
> + * rte_cryptodev_dp_submit_done() is called the data stored in the ctx
> +buffer
> + * will not be effective.
> + *
> + * @param dev_id The device identifier.
> + * @param qp_id The index of the queue pair from which to
> + * retrieve processed packets. The value must
> be
> + * in the range [0, nb_queue_pair - 1]
> previously
> + * supplied to rte_cryptodev_configure().
> + * @param service_type Type of the service requested.
> + * @param sess_type session type.
> + * @param session_ctx Session context data.
> + * @param ctx The data-path service context data.
> + * @param is_update Set 1 if ctx is pre-initialized but need
> + * update to different service type or session,
> + * but the rest driver data remains the same.
> [Arek] - if user will call it only once with is_update == 1 will there be any error
> shown about ctx not set when started processing or is it undefined behavior?
[Fan] - the driver will not know the difference so we may have to rely on the user
to set this correctly
> + rte_cryptodev_dp_configure_service;
> + rte_cryptodev_get_dp_service_ctx_data_size;
>
>
> [Arek] - I know its experimental but following 6 functions are not only static
> but __always_inline__ too, I doubt there will be any symbol generated,
> should they be placed inside map file then?
[Fan] the symbols are not generated. I also have the same question - do we need
to place them in the map file?
> [Arek] - Another thing since symbol is not created these funcs are outside of
> abidifftool scope of interest either I think.
>
> + rte_cryptodev_dp_submit_single_job;
> + rte_cryptodev_dp_sym_submit_vec;
> + rte_cryptodev_dp_submit_done;
> + rte_cryptodev_dp_sym_dequeue;
> + rte_cryptodev_dp_sym_dequeue_single_job;
> + rte_cryptodev_dp_dequeue_done;
> };
>
> [Arek] - Two small things:
> - rte_crypto_data - there is many thing in asymmetric/symmetric
> crypto that can be called like that, and this is basically symmetric
> encrypted/authenticated data pointer, maybe some more specific name
> could be better.
> - can iv, digest, aad be grouped into one? All share the same features,
> and there would less arguments to functions.
[Fan] - answered in the last email. Grouping is a good idea but may limit the usage.
We need a structure that does not necessarily contains the length info for all
data pre-defined when creating the sessions.
> --
> 2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 0/4] cryptodev: add data-path service APIs Fan Zhang
` (3 preceding siblings ...)
2020-08-28 12:58 ` [dpdk-dev] [dpdk-dev v7 4/4] doc: add cryptodev service APIs guide Fan Zhang
@ 2020-09-04 15:25 ` Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto " Fan Zhang
` (4 more replies)
4 siblings, 5 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-04 15:25 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
The direct crypto data-path service is a set of APIs provided especially for
external libraries/applications that want to take advantage of the rich
features offered by cryptodev, but do not necessarily depend on cryptodev
operations, mempools, or mbufs in their data-path implementations.
The direct crypto data-path service has the following advantages:
- Supports raw data pointer and physical addresses as input.
- Does not require a specific data structure allocated from the heap, such as
a cryptodev operation.
- Enqueue in a burst or single operation. The service allows enqueuing in
a burst similar to the ``rte_cryptodev_enqueue_burst`` operation, or enqueuing
only one job at a time while maintaining the necessary context data locally
for the next single job enqueue operation. The latter method is especially
helpful when the user application's crypto operations are clustered into a
burst. Allowing enqueue of one operation at a time removes one additional
loop and also reduces cache misses in the double "looping" situation.
- Customizable dequeue count. Instead of dequeuing the maximum possible number
of operations, as the ``rte_cryptodev_dequeue_burst`` operation does, the
service allows the user to provide a callback function to decide how many
operations to dequeue. This is especially helpful when the expected dequeue
count is hidden inside the opaque data stored during enqueue. The user can
provide the callback function to parse the opaque data structure.
- Abandon enqueue and dequeue anytime. One drawback of the
``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
operations is that once an operation is enqueued/dequeued there is no way to
undo it. The service makes abandoning an operation possible by
creating a local copy of the queue operation data in the service context
data. The data is written back to the driver maintained operation data
only when the enqueue or dequeue done function is called.
Cryptodev PMDs that support this feature will have the
``RTE_CRYPTODEV_FF_DATA_PATH_SERVICE`` feature flag set. To use this
feature the function ``rte_cryptodev_get_dp_service_ctx_data_size`` should
be called to get the data path service context data size. The user should
create a local buffer at least this size long and initialize it using
the ``rte_cryptodev_dp_configure_service`` function call.
The ``rte_cryptodev_dp_configure_service`` function call initializes or
updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
driver specific queue pair data pointer, the service context buffer, and a
set of function pointers to enqueue and dequeue different algorithms'
operations. ``rte_cryptodev_dp_configure_service`` should be called:
- Before enqueuing or dequeuing starts (set ``is_update`` parameter to 0).
- When different cryptodev session, security session, or session-less xform
is used (set ``is_update`` parameter to 1).
Two different enqueue functions are provided.
- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
the ``rte_crypto_sym_vec`` structure.
- ``rte_cryptodev_dp_submit_single_job``: submit single operation.
Neither enqueue function will command the crypto device to start processing
until the ``rte_cryptodev_dp_submit_done`` function is called. Before then the
user shall expect the driver only stores the necessary context data in the
``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
user wants to abandon the submitted operations, simply call the
``rte_cryptodev_dp_configure_service`` function again with the parameter
``is_update`` set to 0. The driver will recover the service context data to
the previous state.
To dequeue the operations the user also has two choices:
- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
user needs to provide the callback functions for the driver to get the
dequeue count and perform post processing such as writing the status field.
- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
As with enqueue, the function ``rte_cryptodev_dp_dequeue_done`` is used to
merge the user's local service context data with the driver's queue operation
data. To abandon the dequeue operation (and keep the operations in the
queue), the user shall skip the ``rte_cryptodev_dp_dequeue_done`` function
call and instead call the ``rte_cryptodev_dp_configure_service`` function
with the parameter ``is_update`` set to 0.
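The callback-driven dequeue described above can be sketched as a minimal, self-contained C model. The names (``model_dequeue``, ``read_burst_len``, ``mark_done``) are hypothetical illustrations, not DPDK APIs: the burst size is not fixed by the caller but parsed from the opaque data of the first completed job, and a post-dequeue callback fills in per-job status.

```c
#include <assert.h>
#include <stdint.h>

/* Callback types mirroring the get-dequeue-count / post-dequeue idea. */
typedef uint32_t (*get_dequeue_count_t)(void *opaque);
typedef void (*post_dequeue_t)(void *opaque, uint32_t index,
			       uint8_t is_op_success);

/* Hypothetical job: its opaque data carries the expected burst length. */
struct job {
	uint32_t burst_len;
	int status;               /* 0 = pending, 1 = ok, -1 = failed */
};

/* Model dequeue: ask the application how many jobs to take (based on the
 * first completed job's opaque data), then post-process each taken job. */
static uint32_t
model_dequeue(struct job *done_jobs, uint32_t n_done,
	      get_dequeue_count_t get_count, post_dequeue_t post)
{
	uint32_t want, n, i;

	if (n_done == 0)
		return 0;
	want = get_count(&done_jobs[0]);
	n = want < n_done ? want : n_done;
	for (i = 0; i < n; i++)
		post(&done_jobs[i], i, 1);
	return n;
}

/* User callback: the dequeue count is hidden in the opaque data. */
static uint32_t read_burst_len(void *opaque)
{
	return ((struct job *)opaque)->burst_len;
}

/* User callback: write the status field during post processing. */
static void mark_done(void *opaque, uint32_t index, uint8_t ok)
{
	(void)index;
	((struct job *)opaque)->status = ok ? 1 : -1;
}
```

The point of the design is that the driver never needs to know how the application encoded the expected count; it only invokes the callbacks.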
There are a few limitations to the data path service:
* Only in-place operations are supported.
* APIs are NOT thread-safe.
* The direct API's enqueue CANNOT be mixed with rte_cryptodev_enqueue_burst,
or vice versa.
v8:
- Updated following by comments.
- Fixed a few bugs.
- Fixed ARM build error.
- Updated the unit test covering all tests.
v7:
- Fixed a few typos.
- Fixed length calculation bugs.
v6:
- Rebased on top of DPDK 20.08.
- Changed to service ctx and added single job submit/dequeue.
v5:
- Changed to use rte_crypto_sym_vec as input.
- Changed to use public APIs instead of use function pointer.
v4:
- Added missed patch.
v3:
- Instead of QAT only API, moved the API to cryptodev.
- Added cryptodev feature flags.
v2:
- Used a structure to simplify parameters.
- Added unit tests.
- Added documentation.
Fan Zhang (4):
cryptodev: add crypto data-path service APIs
crypto/qat: add crypto data-path service API support
test/crypto: add unit-test for cryptodev direct APIs
doc: add cryptodev service APIs guide
app/test/test_cryptodev.c | 452 ++++++++-
app/test/test_cryptodev.h | 7 +
app/test/test_cryptodev_blockcipher.c | 51 +-
doc/guides/prog_guide/cryptodev_lib.rst | 90 ++
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 941 ++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 49 +-
lib/librte_cryptodev/rte_cryptodev.c | 98 ++
lib/librte_cryptodev/rte_cryptodev.h | 332 +++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 +-
.../rte_cryptodev_version.map | 10 +
15 files changed, 2037 insertions(+), 74 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto data-path service APIs
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
@ 2020-09-04 15:25 ` Fan Zhang
2020-09-07 12:36 ` Dybkowski, AdamX
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
` (3 subsequent siblings)
4 siblings, 1 reply; 84+ messages in thread
From: Fan Zhang @ 2020-09-04 15:25 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
Fan Zhang, Piotr Bronowski
This patch adds data-path service APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors and operation modes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
---
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 49 ++-
lib/librte_cryptodev/rte_cryptodev.c | 98 ++++++
lib/librte_cryptodev/rte_cryptodev.h | 332 +++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 ++-
.../rte_cryptodev_version.map | 10 +
6 files changed, 537 insertions(+), 9 deletions(-)
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index fd5ef3a87..f009be9af 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op,
return 0;
}
+/** Crypto data-path service types */
+enum rte_crypto_dp_service {
+ RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
+ RTE_CRYPTO_DP_SYM_AUTH_ONLY,
+ RTE_CRYPTO_DP_SYM_CHAIN,
+ RTE_CRYPTO_DP_SYM_AEAD,
+ RTE_CRYPTO_DP_N_SERVICE
+};
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..376412e94 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -50,6 +50,30 @@ struct rte_crypto_sgl {
uint32_t num;
};
+/**
+ * Symmetric crypto additional data other than source and destination data.
+ * Supposed to be used to pass IV/digest/AAD data buffers with lengths
+ * defined when creating the crypto session.
+ */
+union rte_crypto_sym_additional_data {
+ struct {
+ void *cipher_iv_ptr;
+ rte_iova_t cipher_iv_iova;
+ void *auth_iv_ptr;
+ rte_iova_t auth_iv_iova;
+ void *digest_ptr;
+ rte_iova_t digest_iova;
+ } cipher_auth;
+ struct {
+ void *iv_ptr;
+ rte_iova_t iv_iova;
+ void *digest_ptr;
+ rte_iova_t digest_iova;
+ void *aad_ptr;
+ rte_iova_t aad_iova;
+ } aead;
+};
+
/**
* Synchronous operation descriptor.
* Supposed to be used with CPU crypto API call.
@@ -57,12 +81,25 @@ struct rte_crypto_sgl {
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
- /** array of pointers to IV */
- void **iv;
- /** array of pointers to AAD */
- void **aad;
- /** array of pointers to digest */
- void **digest;
+
+ union {
+
+ /* Supposed to be used with CPU crypto API call. */
+ struct {
+ /** array of pointers to IV */
+ void **iv;
+ /** array of pointers to AAD */
+ void **aad;
+ /** array of pointers to digest */
+ void **digest;
+ };
+
+ /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec()
+ * call.
+ */
+ union rte_crypto_sym_additional_data *additional_data;
+ };
+
/**
* array of statuses for each operation:
* - 0 on success
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..5b670e83e 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1914,6 +1914,104 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
}
+int
+rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id)
+{
+ struct rte_cryptodev *dev;
+ int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
+ int32_t priv_size;
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (*dev->dev_ops->get_drv_ctx_size == NULL ||
+ !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)) {
+ return -1;
+ }
+
+ priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
+ if (priv_size < 0)
+ return -1;
+
+ return RTE_ALIGN_CEIL((size + priv_size), 8);
+}
+
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
+{
+ struct rte_cryptodev *dev;
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)
+ || dev->dev_ops->configure_service == NULL)
+ return -1;
+
+ return (*dev->dev_ops->configure_service)(dev, qp_id, service_type,
+ sess_type, session_ctx, ctx, is_update);
+}
+
+int
+rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque)
+{
+ return _cryptodev_dp_submit_single_job(ctx, data, n_data_vecs, ofs,
+ additional_data, opaque);
+}
+
+uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
+ ofs, opaque);
+}
+
+int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return _cryptodev_dp_sym_dequeue_single_job(ctx, out_opaque);
+}
+
+void
+rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n)
+{
+ (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+void
+rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n)
+{
+ (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data,
+ get_dequeue_count, post_dequeue, out_opaque, is_opaque_array,
+ n_success_jobs);
+}
+
/** Initialise rte_crypto_op mempool element */
static void
rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..5072b3a40 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
/**< Support symmetric session-less operations */
#define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
/**< Support operations on data which is not byte aligned */
-
+#define RTE_CRYPTODEV_FF_DATA_PATH_SERVICE (1ULL << 24)
+/**< Support data-path service APIs with raw data as input */
/**
* Get the name of a crypto device feature flag
@@ -1351,6 +1352,335 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
struct rte_crypto_sym_vec *vec);
+/**
+ * Get the size of the data-path service context for all registered drivers.
+ *
+ * @param dev_id The device identifier.
+ *
+ * @return
+ * - If the device supports data-path service, return the context size.
+ * - If the device does not support the data-path service, return -1.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id);
+
+/**
+ * Union of different crypto session types, including session-less xform
+ * pointer.
+ */
+union rte_cryptodev_session_ctx {
+ struct rte_cryptodev_sym_session *crypto_sess;
+ struct rte_crypto_sym_xform *xform;
+ struct rte_security_session *sec_sess;
+};
+
+/**
+ * Submit a data vector into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void **opaque);
+
+/**
+ * Submit a single job into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param additional_data IV, digest, and aad data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_submit_single_job_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque);
+
+/**
+ * Inform the queue pair to start processing all submitted jobs, or that
+ * dequeuing of all jobs has finished.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param n The total number of submitted jobs.
+ */
+typedef void (*cryptodev_dp_sym_operation_done_t)(void *qp,
+ uint8_t *service_data, uint32_t n);
+
+/**
+ * Typedef of a user-provided callback function for the driver to obtain the
+ * dequeue count. The function may return a fixed number or a number parsed
+ * from the opaque data stored in the first processed job.
+ *
+ * @param opaque Dequeued opaque data.
+ **/
+typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque);
+
+/**
+ * Typedef of a user-provided callback function to perform post-dequeue
+ * operations, such as filling the status.
+ *
+ * @param opaque Dequeued opaque data. In case
+ * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is
+ * set, this value will be the opaque data stored
+ * in the specific processed jobs referenced by
+ * index, otherwise it will be the opaque data
+ * stored in the first processed job in the burst.
+ * @param index Index number of the processed job.
+ * @param is_op_success Driver filled operation status.
+ **/
+typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index,
+ uint8_t is_op_success);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from
+ * the device queue. In case *is_opaque_array*
+ * is set there should be enough room to store
+ * all opaque data.
+ * @param is_opaque_array Set to 1 if the opaque data of every
+ * dequeued job shall be written into the
+ * *out_opaque* array.
+ * @param n_success_jobs Driver-written value to specify the
+ * count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param out_opaque Opaque pointer to be retrieve from
+ * device queue.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation succeeded.
+ * - 0 if the job is dequeued but the operation failed.
+ * - -1 if no job is dequeued.
+ */
+typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
+ void *qp, uint8_t *service_data, void **out_opaque);
+
+/**
+ * Context data for asynchronous crypto process.
+ */
+struct rte_crypto_dp_service_ctx {
+ void *qp_data;
+
+ struct {
+ cryptodev_dp_submit_single_job_t submit_single_job;
+ cryptodev_dp_sym_submit_vec_t submit_vec;
+ cryptodev_dp_sym_operation_done_t submit_done;
+ cryptodev_dp_sym_dequeue_t dequeue_opaque;
+ cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
+ cryptodev_dp_sym_operation_done_t dequeue_done;
+ };
+
+ /* Driver specific service data */
+ __extension__ uint8_t drv_service_data[];
+};
+
+/**
+ * Configure one data-path service context. When calling this function for
+ * the first time the user should unset the *is_update* parameter so that
+ * the driver fills the necessary operation data into the ctx buffer. Note
+ * that the jobs cached through this context do not take effect in the
+ * device until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param service_type Type of the service requested.
+ * @param sess_type Session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update Set to 1 if *ctx* is pre-initialized and only
+ * needs an update to a different service type
+ * or session, while the rest of the driver
+ * data remains the same. Since the service
+ * context data buffer is provided by the user,
+ * the driver will not check the validity of
+ * the buffer nor its content. It is the user's
+ * obligation to initialize the buffer and set
+ * this field properly.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
+
+static __rte_always_inline int
+_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque)
+{
+ return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
+ data, n_data_vecs, ofs, additional_data, opaque);
+}
+
+static __rte_always_inline int
+_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
+ out_opaque);
+}
+
+/**
+ * Submit a single job into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called. This is a
+ * simplified API for submitting one job at a time.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param additional_data IV, digest, and AAD data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque);
+
+/**
+ * Submit a data vector into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque);
+
+/**
+ * Command the queue pair to start processing all jobs submitted since the
+ * last rte_cryptodev_dp_sym_submit_done() call.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of submitted jobs.
+ */
+__rte_experimental
+void
+rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque Opaque pointer array to be retrieved from
+ * the device queue. In case *is_opaque_array*
+ * is set there should be enough room to store
+ * all opaque data.
+ * @param is_opaque_array Set to 1 if the opaque data of every
+ * dequeued job shall be written into the
+ * *out_opaque* array.
+ * @param n_success_jobs Driver-written value to specify the
+ * count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue a single symmetric crypto processing result of user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param out_opaque Opaque pointer to be retrieved from the
+ * device queue. The driver shall support
+ * NULL input of this parameter.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque);
+
+/**
+ * Inform the queue pair that dequeuing jobs has finished.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param n The total number of jobs already dequeued.
+ */
+__rte_experimental
+void
+rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 81975d72b..e19de458c 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -316,6 +316,42 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+/**
+ * Typedef of a driver-provided function to get the size of the service
+ * context private data.
+ *
+ * @param dev Crypto device pointer.
+ *
+ * @return
+ * - On success return the size of the device's service context private data.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_get_service_ctx_size_t)(
+ struct rte_cryptodev *dev);
+
+/**
+ * Typedef of a driver-provided function to configure the data-path service.
+ *
+ * @param dev Crypto device pointer.
+ * @param qp_id Crypto device queue pair index.
+ * @param service_type Type of the service requested.
+ * @param sess_type Session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update Set to 1 if *ctx* is pre-initialized and only
+ * needs an update to a different service type
+ * or session, while the rest of the driver
+ * data remains the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_configure_service_t)(
+ struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx,
+ uint8_t is_update);
/** Crypto device operations function pointer table */
struct rte_cryptodev_ops {
@@ -348,8 +384,16 @@ struct rte_cryptodev_ops {
/**< Clear a Crypto sessions private data. */
cryptodev_asym_free_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
- /**< process input data synchronously (cpu-crypto). */
+ union {
+ cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+ /**< process input data synchronously (cpu-crypto). */
+ struct {
+ cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
+ /**< Get data path service context data size. */
+ cryptodev_dp_configure_service_t configure_service;
+ /**< Initialize crypto service ctx data. */
+ };
+ };
};
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..10388ae90 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -105,4 +105,14 @@ EXPERIMENTAL {
# added in 20.08
rte_cryptodev_get_qp_status;
+
+ # added in 20.11
+ rte_cryptodev_dp_configure_service;
+ rte_cryptodev_dp_get_service_ctx_data_size;
+ rte_cryptodev_dp_sym_dequeue;
+ rte_cryptodev_dp_sym_dequeue_done;
+ rte_cryptodev_dp_sym_dequeue_single_job;
+ rte_cryptodev_dp_sym_submit_done;
+ rte_cryptodev_dp_sym_submit_single_job;
+ rte_cryptodev_dp_sym_submit_vec;
};
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v8 2/4] crypto/qat: add crypto data-path service API support
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-09-04 15:25 ` Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
` (2 subsequent siblings)
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-04 15:25 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
This patch updates QAT PMD to add crypto service API support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 941 +++++++++++++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
5 files changed, 963 insertions(+), 2 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile
index 85d420709..1b71bbbab 100644
--- a/drivers/common/qat/Makefile
+++ b/drivers/common/qat/Makefile
@@ -42,6 +42,7 @@ endif
SRCS-y += qat_sym.c
SRCS-y += qat_sym_session.c
SRCS-y += qat_sym_pmd.c
+ SRCS-y += qat_sym_hw_dp.c
build_qat = yes
endif
endif
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
index a225f374a..bc90ec44c 100644
--- a/drivers/crypto/qat/meson.build
+++ b/drivers/crypto/qat/meson.build
@@ -15,6 +15,7 @@ if dep.found()
qat_sources += files('qat_sym_pmd.c',
'qat_sym.c',
'qat_sym_session.c',
+ 'qat_sym_hw_dp.c',
'qat_asym_pmd.c',
'qat_asym.c')
qat_ext_deps += dep
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 1a9748849..ea2db0ca0 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -264,6 +264,18 @@ qat_sym_process_response(void **op, uint8_t *resp)
}
*op = (void *)rx_op;
}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ uint8_t is_update);
+
+int
+qat_sym_get_service_ctx_size(struct rte_cryptodev *dev);
+
#else
static inline void
@@ -276,5 +288,6 @@ static inline void
qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused)
{
}
+
#endif
#endif /* _QAT_SYM_H_ */
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
new file mode 100644
index 000000000..81887bb96
--- /dev/null
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -0,0 +1,941 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_cryptodev_pmd.h>
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym.h"
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_qp.h"
+
+struct qat_sym_dp_service_ctx {
+ struct qat_sym_session *session;
+ uint32_t tail;
+ uint32_t head;
+ uint16_t cached_enqueue;
+ uint16_t cached_dequeue;
+ enum rte_crypto_dp_service last_service_type;
+};
+
+static __rte_always_inline int32_t
+qat_sym_dp_get_data(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs)
+{
+ struct qat_queue *tx_queue;
+ struct qat_sym_op_cookie *cookie;
+ struct qat_sgl *list;
+ uint32_t i;
+ uint32_t total_len;
+
+ if (likely(n_data_vecs == 1)) {
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ data[0].iova;
+ req->comn_mid.src_length = req->comn_mid.dst_length =
+ data[0].len;
+ return data[0].len;
+ }
+
+ if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER)
+ return -1;
+
+ total_len = 0;
+ tx_queue = &qp->tx_q;
+
+ ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags,
+ QAT_COMN_PTR_TYPE_SGL);
+ cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz];
+ list = (struct qat_sgl *)&cookie->qat_sgl_src;
+
+ for (i = 0; i < n_data_vecs; i++) {
+ list->buffers[i].len = data[i].len;
+ list->buffers[i].resrvd = 0;
+ list->buffers[i].addr = data[i].iova;
+ if (data[i].len > UINT32_MAX - total_len) {
+ QAT_DP_LOG(ERR, "Message too long");
+ return -1;
+ }
+ total_len += data[i].len;
+ }
+
+ list->num_bufs = i;
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ cookie->qat_sgl_src_phys_addr;
+ req->comn_mid.src_length = req->comn_mid.dst_length = 0;
+ return total_len;
+}
+
+static __rte_always_inline void
+set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
+ union rte_crypto_sym_additional_data *a_data, uint32_t iv_len,
+ struct icp_qat_fw_la_bulk_req *qat_req)
+{
+ /* copy IV into request if it fits */
+ if (iv_len <= sizeof(cipher_param->u.cipher_IV_array))
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ a_data->cipher_auth.cipher_iv_ptr, iv_len);
+ else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr =
+ a_data->cipher_auth.cipher_iv_iova;
+ }
+}
+
+#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \
+ (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \
+ ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status))
+
+static __rte_always_inline void
+qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n)
+{
+ uint32_t i;
+
+ for (i = 0; i < n; i++)
+ sta[i] = status;
+}
+
+#define QAT_SYM_DP_CHECK_ENQ_POSSIBLE(q, c, n) \
+ (q->enqueued - q->dequeued + c + n < q->max_inflights)
+
+static __rte_always_inline void
+submit_one_aead_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param =
+ (void *)&req->serv_specif_rqpars;
+ struct icp_qat_fw_la_auth_req_params *auth_param =
+ (void *)((uint8_t *)&req->serv_specif_rqpars +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+ uint8_t *aad_data;
+ uint8_t aad_ccm_real_len;
+ uint8_t aad_len_field_sz;
+ uint32_t msg_len_be;
+ rte_iova_t aad_iova = 0;
+ uint8_t q;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ a_data->aead.iv_ptr, ctx->cipher_iv.length);
+ aad_iova = a_data->aead.aad_iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC:
+ aad_data = a_data->aead.aad_ptr;
+ aad_iova = a_data->aead.aad_iova;
+ aad_ccm_real_len = 0;
+ aad_len_field_sz = 0;
+ msg_len_be = rte_bswap32((uint32_t)data_len -
+ ofs.ofs.cipher.head);
+
+ if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) {
+ aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ aad_ccm_real_len = ctx->aad_len -
+ ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ } else {
+ aad_data = a_data->aead.iv_ptr;
+ aad_iova = a_data->aead.iv_iova;
+ }
+
+ q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length;
+ aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS(
+ aad_len_field_sz, ctx->digest_length, q);
+ if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET + (q -
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE),
+ (uint8_t *)&msg_len_be,
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE);
+ } else {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)&msg_len_be +
+ (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE
+ - q), q);
+ }
+
+ if (aad_len_field_sz > 0) {
+ *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] =
+ rte_bswap16(aad_ccm_real_len);
+
+ if ((aad_ccm_real_len + aad_len_field_sz)
+ % ICP_QAT_HW_CCM_AAD_B0_LEN) {
+ uint8_t pad_len = 0;
+ uint8_t pad_idx = 0;
+
+ pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ((aad_ccm_real_len +
+ aad_len_field_sz) %
+ ICP_QAT_HW_CCM_AAD_B0_LEN);
+ pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN +
+ aad_ccm_real_len +
+ aad_len_field_sz;
+ memset(&aad_data[pad_idx], 0, pad_len);
+ }
+ }
+
+ rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array)
+ + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)a_data->aead.iv_ptr +
+ ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length);
+ *(uint8_t *)&cipher_param->u.cipher_IV_array[0] =
+ q - ICP_QAT_HW_CCM_NONCE_OFFSET;
+
+ rte_memcpy((uint8_t *)a_data->aead.aad_ptr +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)a_data->aead.iv_ptr +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ break;
+ default:
+ break;
+ }
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_param->auth_off = ofs.ofs.cipher.head;
+ auth_param->auth_len = cipher_param->cipher_length;
+ auth_param->auth_res_addr = a_data->aead.digest_iova;
+ auth_param->u1.aad_adr = aad_iova;
+
+ if (ctx->is_single_pass) {
+ cipher_param->spc_aad_addr = aad_iova;
+ cipher_param->spc_auth_res_addr = a_data->aead.digest_iova;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_aead_job(ctx, req, a_data, ofs,
+ (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_aead_job(ctx, req, vec->additional_data + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_cipher_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+
+ /* cipher IV */
+ set_cipher_iv(cipher_param, a_data, ctx->cipher_iv.length, req);
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_cipher_job(ctx, req, a_data, ofs, (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_cipher_job(ctx, req, vec->additional_data + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_auth_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = data_len - ofs.ofs.auth.head -
+ ofs.ofs.auth.tail;
+ auth_param->auth_res_addr = a_data->cipher_auth.digest_iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = a_data->cipher_auth.auth_iv_iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ a_data->cipher_auth.auth_iv_ptr,
+ ctx->auth_iv.length);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data, void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_auth_job(ctx, req, a_data, ofs, (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_auth_job(ctx, req, vec->additional_data + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_chain_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+ rte_iova_t auth_iova_end;
+ int32_t cipher_len, auth_len;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ cipher_len = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+
+ assert(cipher_len > 0 && auth_len > 0);
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = cipher_len;
+ set_cipher_iv(cipher_param, a_data, ctx->cipher_iv.length, req);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = auth_len;
+ auth_param->auth_res_addr = a_data->cipher_auth.digest_iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = a_data->cipher_auth.auth_iv_iova;
+
+ if (unlikely(n_data_vecs > 1)) {
+ int auth_end_get = 0, i = n_data_vecs - 1;
+ struct rte_crypto_vec *cvec = &data[0];
+ uint32_t len;
+
+ len = data_len - ofs.ofs.auth.tail;
+
+ while (i >= 0 && len > 0) {
+ if (cvec->len >= len) {
+ auth_iova_end = cvec->iova +
+ (cvec->len - len);
+ len = 0;
+ auth_end_get = 1;
+ break;
+ }
+ len -= cvec->len;
+ i--;
+ cvec++;
+ }
+
+ assert(auth_end_get != 0);
+ } else
+ auth_iova_end = data[0].iova + auth_param->auth_off +
+ auth_param->auth_len;
+
+ /* Then check if digest-encrypted conditions are met */
+ if ((auth_param->auth_off + auth_param->auth_len <
+ cipher_param->cipher_offset +
+ cipher_param->cipher_length) &&
+ (a_data->cipher_auth.digest_iova == auth_iova_end)) {
+ /* Handle partial digest encryption */
+ if (cipher_param->cipher_offset +
+ cipher_param->cipher_length <
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length)
+ req->comn_mid.dst_length =
+ req->comn_mid.src_length =
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length;
+ struct icp_qat_fw_comn_req_hdr *header =
+ &req->comn_hdr;
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+ header->serv_specif_flags,
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+ }
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data, void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_chain_job(ctx, req, data, n_data_vecs, a_data, ofs,
+ (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_chain_job(ctx, req, vec->sgl[i].vec, vec->sgl[i].num,
+ vec->additional_data + i, ofs, (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_dequeue(void *qp_data, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct icp_qat_fw_comn_resp *resp;
+ void *resp_opaque;
+ uint32_t i, n, inflight;
+ uint32_t head;
+ uint8_t status;
+
+ *n_success_jobs = 0;
+ head = dp_ctx->head;
+
+ inflight = qp->enqueued - qp->dequeued;
+ if (unlikely(inflight == 0))
+ return 0;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ head);
+ /* no operation ready */
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return 0;
+
+ resp_opaque = (void *)(uintptr_t)resp->opaque_data;
+ /* get the dequeue count */
+ n = get_dequeue_count(resp_opaque);
+ if (unlikely(n == 0))
+ return 0;
+
+ out_opaque[0] = resp_opaque;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ post_dequeue(resp_opaque, 0, status);
+ *n_success_jobs += status;
+
+ head = (head + rx_queue->msg_size) & rx_queue->modulo_mask;
+
+ /* we already finished dequeue when n == 1 */
+ if (unlikely(n == 1)) {
+ i = 1;
+ goto end_deq;
+ }
+
+ if (is_opaque_array) {
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp ==
+ ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ out_opaque[i] = (void *)(uintptr_t)
+ resp->opaque_data;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ *n_success_jobs += status;
+ post_dequeue(out_opaque[i], i, status);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ }
+
+ goto end_deq;
+ }
+
+ /* opaque is not array */
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ post_dequeue(resp_opaque, i, status);
+ *n_success_jobs += status;
+ }
+
+end_deq:
+ dp_ctx->head = head;
+ dp_ctx->cached_dequeue += i;
+ return i;
+}
+
+static __rte_always_inline int
+qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *service_data,
+ void **out_opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+
+ register struct icp_qat_fw_comn_resp *resp;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ dp_ctx->head);
+
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return -1;
+
+ *out_opaque = (void *)(uintptr_t)resp->opaque_data;
+
+ dp_ctx->head = (dp_ctx->head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ dp_ctx->cached_dequeue++;
+
+ return QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+}
+
+static __rte_always_inline void
+qat_sym_dp_kick_tail(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+
+ assert(dp_ctx->cached_enqueue == n);
+
+ qp->enqueued += n;
+ qp->stats.enqueued_count += n;
+
+ tx_queue->tail = dp_ctx->tail;
+
+ WRITE_CSR_RING_TAIL(qp->mmap_bar_addr,
+ tx_queue->hw_bundle_number,
+ tx_queue->hw_queue_number, tx_queue->tail);
+ tx_queue->csr_tail = tx_queue->tail;
+ dp_ctx->cached_enqueue = 0;
+}
+
+static __rte_always_inline void
+qat_sym_dp_update_head(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+
+ assert(dp_ctx->cached_dequeue == n);
+
+ rx_queue->head = dp_ctx->head;
+ rx_queue->nb_processed_responses += n;
+ qp->dequeued += n;
+ qp->stats.dequeued_count += n;
+ if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) {
+ uint32_t old_head, new_head;
+ uint32_t max_head;
+
+ old_head = rx_queue->csr_head;
+ new_head = rx_queue->head;
+ max_head = qp->nb_descriptors * rx_queue->msg_size;
+
+ /* write out free descriptors */
+ void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head;
+
+ if (new_head < old_head) {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE,
+ max_head - old_head);
+ memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE,
+ new_head);
+ } else {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head -
+ old_head);
+ }
+ rx_queue->nb_processed_responses = 0;
+ rx_queue->csr_head = new_head;
+
+ /* write current head to CSR */
+ WRITE_CSR_RING_HEAD(qp->mmap_bar_addr,
+ rx_queue->hw_bundle_number, rx_queue->hw_queue_number,
+ new_head);
+ }
+ dp_ctx->cached_dequeue = 0;
+}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ uint8_t is_update)
+{
+ struct qat_qp *qp;
+ struct qat_sym_session *ctx;
+ struct qat_sym_dp_service_ctx *dp_ctx;
+
+ if (service_ctx == NULL || session_ctx.crypto_sess == NULL ||
+ sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -EINVAL;
+
+ qp = dev->data->queue_pairs[qp_id];
+ ctx = (struct qat_sym_session *)get_sym_session_private_data(
+ session_ctx.crypto_sess, qat_sym_driver_id);
+ dp_ctx = (struct qat_sym_dp_service_ctx *)
+ service_ctx->drv_service_data;
+
+ if (!is_update) {
+ memset(service_ctx, 0, sizeof(*service_ctx) +
+ sizeof(struct qat_sym_dp_service_ctx));
+ service_ctx->qp_data = dev->data->queue_pairs[qp_id];
+ /* set the session only after memset() so it is not cleared */
+ dp_ctx->session = ctx;
+ dp_ctx->tail = qp->tx_q.tail;
+ dp_ctx->head = qp->rx_q.head;
+ dp_ctx->cached_enqueue = dp_ctx->cached_dequeue = 0;
+ } else {
+ dp_ctx->session = ctx;
+ if (dp_ctx->last_service_type == service_type)
+ return 0;
+ }
+
+ dp_ctx->last_service_type = service_type;
+
+ service_ctx->submit_done = qat_sym_dp_kick_tail;
+ service_ctx->dequeue_opaque = qat_sym_dp_dequeue;
+ service_ctx->dequeue_single = qat_sym_dp_dequeue_single_job;
+ service_ctx->dequeue_done = qat_sym_dp_update_head;
+
+ if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER ||
+ ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+ /* AES-GCM or AES-CCM */
+ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 ||
+ (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128
+ && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE
+ && ctx->qat_hash_alg ==
+ ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AEAD)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_aead_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_aead;
+ } else {
+ if (service_type != RTE_CRYPTO_DP_SYM_CHAIN)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_chain_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_chain;
+ }
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AUTH_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_auth_jobs;
+ service_ctx->submit_single_job = qat_sym_dp_submit_single_auth;
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
+ if (service_type != RTE_CRYPTO_DP_SYM_CIPHER_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_cipher_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_cipher;
+ }
+
+ return 0;
+}
+
+int
+qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+ return sizeof(struct qat_sym_dp_service_ctx);
+}
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 314742f53..aaaf3e3f1 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = {
/* Crypto related operations */
.sym_session_get_size = qat_sym_session_get_private_size,
.sym_session_configure = qat_sym_session_configure,
- .sym_session_clear = qat_sym_session_clear
+ .sym_session_clear = qat_sym_session_clear,
+
+ /* Data plane service related operations */
+ .get_drv_ctx_size = qat_sym_get_service_ctx_size,
+ .configure_service = qat_sym_dp_configure_service_ctx,
};
#ifdef RTE_LIBRTE_SECURITY
@@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+ RTE_CRYPTODEV_FF_DATA_PATH_SERVICE;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
--
2.20.1
* [dpdk-dev] [dpdk-dev v8 3/4] test/crypto: add unit-test for cryptodev direct APIs
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto " Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
@ 2020-09-04 15:25 ` Fan Zhang
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 4/4] doc: add cryptodev service APIs guide Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-04 15:25 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
This patch adds QAT tests that use the cryptodev symmetric crypto
direct APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev.c | 452 +++++++++++++++++++++++---
app/test/test_cryptodev.h | 7 +
app/test/test_cryptodev_blockcipher.c | 51 ++-
3 files changed, 447 insertions(+), 63 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..387a3cf15 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -49,6 +49,8 @@
#define VDEV_ARGS_SIZE 100
#define MAX_NB_SESSIONS 4
+#define MAX_DRV_SERVICE_CTX_SIZE 256
+
#define IN_PLACE 0
#define OUT_OF_PLACE 1
@@ -57,6 +59,8 @@ static int gbl_driver_id;
static enum rte_security_session_action_type gbl_action_type =
RTE_SECURITY_ACTION_TYPE_NONE;
+int cryptodev_dp_test;
+
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
struct rte_mempool *large_mbuf_pool;
@@ -147,6 +151,173 @@ ceil_byte_length(uint32_t num_bits)
return (num_bits >> 3);
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
+ uint8_t cipher_iv_len)
+{
+ int32_t n;
+ struct rte_crypto_sym_op *sop;
+ struct rte_crypto_op *ret_op = NULL;
+ struct rte_crypto_vec data_vec[UINT8_MAX];
+ union rte_crypto_sym_additional_data a_data;
+ union rte_crypto_sym_ofs ofs;
+ int32_t status;
+ uint32_t max_len;
+ union rte_cryptodev_session_ctx sess;
+ enum rte_crypto_dp_service service_type;
+ uint32_t count = 0;
+ uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0};
+ struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
+ uint32_t cipher_offset = 0, cipher_len = 0, auth_offset = 0,
+ auth_len = 0;
+ int ctx_service_size;
+
+ sop = op->sym;
+
+ sess.crypto_sess = sop->session;
+
+ if (is_cipher && is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_CHAIN;
+ cipher_offset = sop->cipher.data.offset;
+ cipher_len = sop->cipher.data.length;
+ auth_offset = sop->auth.data.offset;
+ auth_len = sop->auth.data.length;
+ max_len = RTE_MAX(cipher_offset + cipher_len,
+ auth_offset + auth_len);
+ } else if (is_cipher) {
+ service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY;
+ cipher_offset = sop->cipher.data.offset;
+ cipher_len = sop->cipher.data.length;
+ max_len = cipher_len + cipher_offset;
+ } else if (is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY;
+ auth_offset = sop->auth.data.offset;
+ auth_len = sop->auth.data.length;
+ max_len = auth_len + auth_offset;
+ } else { /* aead */
+ service_type = RTE_CRYPTO_DP_SYM_AEAD;
+ cipher_offset = sop->aead.data.offset;
+ cipher_len = sop->aead.data.length;
+ max_len = cipher_len + cipher_offset;
+ }
+
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ cipher_offset = cipher_offset >> 3;
+ auth_offset = auth_offset >> 3;
+ cipher_len = cipher_len >> 3;
+ auth_len = auth_len >> 3;
+ }
+
+ ctx_service_size = rte_cryptodev_dp_get_service_ctx_data_size(dev_id);
+ assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE &&
+ ctx_service_size > 0);
+
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ /* test update service */
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ n = rte_crypto_mbuf_to_vec(sop->m_src, 0, max_len,
+ data_vec, RTE_DIM(data_vec));
+ if (n < 0 || n > sop->m_src->nb_segs) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ ofs.raw = 0;
+
+ switch (service_type) {
+ case RTE_CRYPTO_DP_SYM_AEAD:
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ a_data.aead.iv_ptr = rte_crypto_op_ctod_offset(op, void *,
+ IV_OFFSET);
+ a_data.aead.iv_iova = rte_crypto_op_ctophys_offset(op,
+ IV_OFFSET);
+ a_data.aead.aad_ptr = (void *)sop->aead.aad.data;
+ a_data.aead.aad_iova = sop->aead.aad.phys_addr;
+ a_data.aead.digest_ptr = (void *)sop->aead.digest.data;
+ a_data.aead.digest_iova = sop->aead.digest.phys_addr;
+ break;
+ case RTE_CRYPTO_DP_SYM_CIPHER_ONLY:
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET);
+ a_data.cipher_auth.cipher_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ break;
+ case RTE_CRYPTO_DP_SYM_AUTH_ONLY:
+ ofs.ofs.auth.head = auth_offset;
+ ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
+ a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET + cipher_iv_len);
+ a_data.cipher_auth.auth_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET +
+ cipher_iv_len);
+ a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data;
+ a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr;
+ break;
+ case RTE_CRYPTO_DP_SYM_CHAIN:
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ ofs.ofs.auth.head = auth_offset;
+ ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
+ a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET);
+ a_data.cipher_auth.cipher_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET + cipher_iv_len);
+ a_data.cipher_auth.auth_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET +
+ cipher_iv_len);
+ a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data;
+ a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr;
+ break;
+ default:
+ break;
+ }
+
+ status = rte_cryptodev_dp_sym_submit_single_job(ctx, data_vec, n, ofs,
+ &a_data, (void *)op);
+ if (status < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ rte_cryptodev_dp_sym_submit_done(ctx, 1);
+
+ status = -1;
+ while (count++ < 65535 && status == -1) {
+ status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
+ (void **)&ret_op);
+ if (status == -1)
+ rte_pause();
+ }
+
+ if (status != -1)
+ rte_cryptodev_dp_sym_dequeue_done(ctx, 1);
+
+ if (count == 65536 || status != 1 || ret_op != op) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ op->status = status == 1 ? RTE_CRYPTO_OP_STATUS_SUCCESS :
+ RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static void
process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
{
@@ -1656,6 +1827,9 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -1710,12 +1884,18 @@ test_AES_cipheronly_all(void)
static int
test_AES_docsis_all(void)
{
+ /* Data-path service does not support DOCSIS yet */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_blockcipher(BLKCIPHER_AES_DOCSIS_TYPE);
}
static int
test_DES_docsis_all(void)
{
+ /* Data-path service does not support DOCSIS yet */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_blockcipher(BLKCIPHER_DES_DOCSIS_TYPE);
}
@@ -2470,7 +2650,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -2549,7 +2733,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2619,6 +2807,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -2690,7 +2881,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2897,8 +3092,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
- ut_params->op);
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -2983,7 +3182,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3026,8 +3229,9 @@ test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ /* Data-path service does not support OOP */
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create KASUMI session */
@@ -3107,8 +3311,9 @@ test_kasumi_encryption_oop_sgl(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ /* Data-path service does not support OOP */
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -3192,8 +3397,9 @@ test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ /* Data-path service does not support OOP */
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create KASUMI session */
@@ -3306,7 +3512,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3381,7 +3591,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3419,7 +3633,7 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create SNOW 3G session */
@@ -3502,7 +3716,7 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -3621,7 +3835,7 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create SNOW 3G session */
@@ -3756,7 +3970,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -3791,7 +4009,7 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create SNOW 3G session */
@@ -3924,7 +4142,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4019,7 +4241,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4087,6 +4313,8 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
printf("Device doesn't support digest encrypted.\n");
return -ENOTSUP;
}
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
}
/* Create SNOW 3G session */
@@ -4155,7 +4383,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4266,6 +4498,8 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -4344,7 +4578,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4453,6 +4691,8 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
uint64_t feat_flags = dev_info.feature_flags;
if (op_mode == OUT_OF_PLACE) {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED)) {
printf("Device doesn't support digest encrypted.\n");
return -ENOTSUP;
@@ -4526,7 +4766,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4638,6 +4882,8 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -4716,7 +4962,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4857,7 +5107,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4944,7 +5198,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5031,7 +5289,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5119,7 +5381,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5177,6 +5443,8 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -5251,7 +5519,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5359,6 +5631,8 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -5437,7 +5711,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5580,6 +5858,9 @@ test_kasumi_decryption_test_case_2(void)
static int
test_kasumi_decryption_test_case_3(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_kasumi_decryption(&kasumi_test_case_3);
}
@@ -5779,6 +6060,9 @@ test_snow3g_auth_cipher_part_digest_enc_oop(void)
static int
test_snow3g_auth_cipher_test_case_3_sgl(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_snow3g_auth_cipher_sgl(
&snow3g_auth_cipher_test_case_3, IN_PLACE, 0);
}
@@ -5793,6 +6077,9 @@ test_snow3g_auth_cipher_test_case_3_oop_sgl(void)
static int
test_snow3g_auth_cipher_part_digest_enc_sgl(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_snow3g_auth_cipher_sgl(
&snow3g_auth_cipher_partial_digest_encryption,
IN_PLACE, 0);
@@ -6146,10 +6433,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
unsigned int ciphertext_len;
struct rte_cryptodev_info dev_info;
- struct rte_crypto_op *op;
/* Check if device supports particular algorithms separately */
- if (test_mixed_check_if_unsupported(tdata))
+ if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -6161,6 +6447,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
return -ENOTSUP;
}
+ if (op_mode == OUT_OF_PLACE)
+ return -ENOTSUP;
+
/* Create the session */
if (verify)
retval = create_wireless_algo_cipher_auth_session(
@@ -6192,9 +6481,11 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
/* clear mbuf payload */
memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
rte_pktmbuf_tailroom(ut_params->ibuf));
- if (op_mode == OUT_OF_PLACE)
+ if (op_mode == OUT_OF_PLACE) {
+
memset(rte_pktmbuf_mtod(ut_params->obuf, uint8_t *), 0,
rte_pktmbuf_tailroom(ut_params->obuf));
+ }
ciphertext_len = ceil_byte_length(tdata->ciphertext.len_bits);
plaintext_len = ceil_byte_length(tdata->plaintext.len_bits);
@@ -6235,18 +6526,17 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
if (retval < 0)
return retval;
- op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
/* Check if the op failed because the device doesn't */
/* support this particular combination of algorithms */
- if (op == NULL && ut_params->op->status ==
+ if (ut_params->op == NULL && ut_params->op->status ==
RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
printf("Device doesn't support this mixed combination. "
"Test Skipped.\n");
return -ENOTSUP;
}
- ut_params->op = op;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -6337,10 +6627,9 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata,
uint8_t digest_buffer[10000];
struct rte_cryptodev_info dev_info;
- struct rte_crypto_op *op;
/* Check if device supports particular algorithms */
- if (test_mixed_check_if_unsupported(tdata))
+ if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -6440,20 +6729,18 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata,
if (retval < 0)
return retval;
- op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
/* Check if the op failed because the device doesn't */
/* support this particular combination of algorithms */
- if (op == NULL && ut_params->op->status ==
+ if (ut_params->op == NULL && ut_params->op->status ==
RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
printf("Device doesn't support this mixed combination. "
"Test Skipped.\n");
return -ENOTSUP;
}
- ut_params->op = op;
-
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = (op_mode == IN_PLACE ?
@@ -7043,6 +7330,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8540,6 +8830,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8833,6 +9126,9 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
&cap_idx) == NULL)
return -ENOTSUP;
+ /* Data-path service does not support OOP */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
/* not supported with CPU crypto */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
@@ -8923,8 +9219,9 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
- /* not supported with CPU crypto */
- if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+ /* not supported with CPU crypto and data-path service */
+ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO ||
+ cryptodev_dp_test)
return -ENOTSUP;
/* Create AEAD session */
@@ -9151,8 +9448,13 @@ test_authenticated_decryption_sessionless(
"crypto op session type not sessionless");
/* Process crypto operation */
- TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
- ut_params->op), "failed to process sym crypto op");
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
+ else
+ TEST_ASSERT_NOT_NULL(process_crypto_request(
+ ts_params->valid_devs[0], ut_params->op),
+ "failed to process sym crypto op");
TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
@@ -9472,6 +9774,9 @@ test_MD5_HMAC_generate(const struct HMAC_MD5_vector *test_case)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -9530,6 +9835,9 @@ test_MD5_HMAC_verify(const struct HMAC_MD5_vector *test_case)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10098,6 +10406,9 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10215,6 +10526,9 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10780,7 +11094,10 @@ test_authentication_verify_fail_when_data_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10851,7 +11168,10 @@ test_authentication_verify_GMAC_fail_when_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10926,7 +11246,10 @@ test_authenticated_decryption_fail_when_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -11021,6 +11344,9 @@ test_authenticated_encryt_with_esn(
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
ut_params->op = process_crypto_request(
ts_params->valid_devs[0], ut_params->op);
@@ -11141,6 +11467,9 @@ test_authenticated_decrypt_with_esn(
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -11285,6 +11614,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
unsigned int sgl_in = fragsz < tdata->plaintext.len;
unsigned int sgl_out = (fragsz_oop ? fragsz_oop : fragsz) <
tdata->plaintext.len;
+ /* Data path service does not support OOP */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (sgl_in && !sgl_out) {
if (!(dev_info.feature_flags &
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))
@@ -11480,6 +11812,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
if (oop == IN_PLACE &&
gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -13041,6 +13376,29 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+ int ret;
+
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both "
+ "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
+ "are enabled in config file to run this testsuite.\n");
+ return TEST_SKIPPED;
+ }
+
+ cryptodev_dp_test = 1;
+ ret = unit_test_suite_runner(&cryptodev_testsuite);
+ cryptodev_dp_test = 0;
+
+ return ret;
+}
+
+REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api);
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..e4e4c7626 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -71,6 +71,8 @@
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+extern int cryptodev_dp_test;
+
/**
* Write (spread) data from buffer to mbuf data
*
@@ -209,4 +211,9 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len,
return NULL;
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
+ uint8_t cipher_iv_len);
+
#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 221262341..311b34c15 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -462,25 +462,44 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
/* Process crypto operation */
- if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Error sending packet for encryption");
- status = TEST_FAILED;
- goto error_exit;
- }
+ if (cryptodev_dp_test) {
+ uint8_t is_cipher = 0, is_auth = 0;
- op = NULL;
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
+ RTE_LOG(DEBUG, USER1,
+ "QAT direct API does not support OOP, Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED");
+ status = TEST_SUCCESS;
+ goto error_exit;
+ }
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
+ is_cipher = 1;
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
+ is_auth = 1;
- while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
- rte_pause();
+ process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0,
+ tdata->iv.len);
+ } else {
+ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Error sending packet for encryption");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
- if (!op) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Failed to process sym crypto op");
- status = TEST_FAILED;
- goto error_exit;
+ op = NULL;
+
+ while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
+ rte_pause();
+
+ if (!op) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Failed to process sym crypto op");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
}
debug_hexdump(stdout, "m_src(after):",
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v8 4/4] doc: add cryptodev service APIs guide
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
` (2 preceding siblings ...)
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
@ 2020-09-04 15:25 ` Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-04 15:25 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
This patch updates programmer's guide to demonstrate the usage
and limitations of cryptodev symmetric crypto data-path service
APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++
1 file changed, 90 insertions(+)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..1321e4c5d 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -631,6 +631,96 @@ a call argument. Status different than zero must be treated as error.
For more details, e.g. how to convert an mbuf to an SGL, please refer to an
example usage in the IPsec library implementation.
+Cryptodev Direct Data-path Service API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The direct crypto data-path service is a set of APIs provided especially for
+external libraries/applications that want to take advantage of the rich
+features provided by cryptodev, but do not necessarily depend on cryptodev
+operations, mempools, or mbufs in their data-path implementations.
+
+The direct crypto data-path service has the following advantages:
+- Supports raw data pointer and physical addresses as input.
+- Does not require a specific data structure allocated from the heap, such as
+  a cryptodev operation.
+- Enqueue in a burst or a single operation. The service allows enqueuing a
+  burst similar to ``rte_cryptodev_enqueue_burst``, or enqueuing one job at a
+  time while maintaining the necessary context data locally for the next
+  single-job enqueue. The latter method is especially helpful when the user
+  application's crypto operations are clustered into a burst: enqueuing one
+  operation at a time removes an extra loop and reduces the cache misses
+  caused by double looping.
+- Customizable dequeue count. Instead of dequeuing the maximum possible
+  number of operations, as ``rte_cryptodev_dequeue_burst`` does, the service
+  allows the user to provide a callback function that decides how many
+  operations to dequeue. This is especially helpful when the expected dequeue
+  count is hidden inside the opaque data stored during enqueue; the
+  user-provided callback can parse that opaque data structure.
+- Abandon enqueue and dequeue at any time. One drawback of the
+  ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
+  operations is that once an operation is enqueued/dequeued there is no way
+  to undo it. The service makes abandoning possible by keeping a local copy
+  of the queue operation data in the service context data. The data is
+  written back to the driver-maintained operation data only when the enqueue
+  or dequeue done function is called.
+
+Cryptodev PMDs that support this feature present the
+``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag. To use the feature, the
+function ``rte_cryptodev_get_dp_service_ctx_data_size`` should be called to
+get the data-path service context data size. The user should then create a
+local buffer at least that size and initialize it with a
+``rte_cryptodev_dp_configure_service`` call.
+
+The ``rte_cryptodev_dp_configure_service`` call initializes or updates the
+``struct rte_crypto_dp_service_ctx`` buffer, which contains the
+driver-specific queue pair data pointer, the service context buffer, and a
+set of function pointers for enqueuing and dequeuing different algorithms'
+operations. ``rte_cryptodev_dp_configure_service`` should be called:
+
+- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
+- When a different cryptodev session, security session, or session-less xform
+  is used (set the ``is_update`` parameter to 1).
+
+Two different enqueue functions are provided.
+
+- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
+ the ``rte_crypto_sym_vec`` structure.
+- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
+
+Neither enqueue function commands the crypto device to start processing until
+the ``rte_cryptodev_dp_submit_done`` function is called. Until then the user
+shall expect the driver only to store the necessary context data in the
+``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
+user wants to abandon the submitted operations, simply call the
+``rte_cryptodev_dp_configure_service`` function again with the ``is_update``
+parameter set to 0. The driver will recover the service context data to
+the previous state.
+
+To dequeue the operations the user also has two choices:
+
+- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
+  user provides callback functions for the driver to get the dequeue count
+  and to perform post-processing such as writing the status field.
+- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
+
+As with enqueue, the ``rte_cryptodev_dp_dequeue_done`` function merges the
+user's local service context data with the driver's queue operation data.
+To abandon the dequeue operation (keeping the operations in the queue), the
+user shall skip the ``rte_cryptodev_dp_dequeue_done`` call and instead call
+the ``rte_cryptodev_dp_configure_service`` function with the ``is_update``
+parameter set to 0.
+
+There are a few limitations to the data-path service:
+
+* Only in-place operations are supported.
+* The APIs are NOT thread-safe.
+* The direct APIs' enqueue/dequeue CANNOT be mixed with
+  ``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst``, or vice
+  versa.
+
+See the *DPDK API Reference* for details on each API definition.
+
Sample code
-----------
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto data-path service APIs
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-09-07 12:36 ` Dybkowski, AdamX
0 siblings, 0 replies; 84+ messages in thread
From: Dybkowski, AdamX @ 2020-09-07 12:36 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Kusztal, ArkadiuszX, Bronowski, PiotrX
> -----Original Message-----
> From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Sent: Friday, 4 September, 2020 17:26
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal,
> ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> Bronowski, PiotrX <piotrx.bronowski@intel.com>
> Subject: [dpdk-dev v8 1/4] cryptodev: add crypto data-path service APIs
>
> This patch adds data-path service APIs for enqueue and dequeue operations
> to cryptodev. The APIs support flexible user-define enqueue and dequeue
> behaviors and operation mode.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Series
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 0/4] cryptodev: add data-path service APIs Fan Zhang
` (3 preceding siblings ...)
2020-09-04 15:25 ` [dpdk-dev] [dpdk-dev v8 4/4] doc: add cryptodev service APIs guide Fan Zhang
@ 2020-09-08 8:42 ` Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto " Fan Zhang
` (4 more replies)
4 siblings, 5 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-08 8:42 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
The direct crypto data-path service is a set of APIs provided especially for
external libraries/applications that want to take advantage of the rich
features provided by cryptodev, but do not necessarily depend on cryptodev
operations, mempools, or mbufs in their data-path implementations.
The direct crypto data-path service has the following advantages:
- Supports raw data pointer and physical addresses as input.
- Does not require a specific data structure allocated from the heap, such as
a cryptodev operation.
- Enqueue in a burst or a single operation. The service allows enqueuing a
burst similar to ``rte_cryptodev_enqueue_burst``, or enqueuing one job at a
time while maintaining the necessary context data locally for the next
single-job enqueue. The latter method is especially helpful when the user
application's crypto operations are clustered into a burst: enqueuing one
operation at a time removes an extra loop and reduces the cache misses
caused by double looping.
- Customizable dequeue count. Instead of dequeuing the maximum possible
number of operations, as ``rte_cryptodev_dequeue_burst`` does, the service
allows the user to provide a callback function that decides how many
operations to dequeue. This is especially helpful when the expected dequeue
count is hidden inside the opaque data stored during enqueue; the
user-provided callback can parse that opaque data structure.
- Abandon enqueue and dequeue at any time. One drawback of the
``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
operations is that once an operation is enqueued/dequeued there is no way
to undo it. The service makes abandoning possible by keeping a local copy
of the queue operation data in the service context data. The data is
written back to the driver-maintained operation data only when the enqueue
or dequeue done function is called.
Cryptodev PMDs that support this feature present the
``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag. To use the feature, the
function ``rte_cryptodev_get_dp_service_ctx_data_size`` should be called to
get the data-path service context data size. The user should then create a
local buffer at least that size and initialize it with a
``rte_cryptodev_dp_configure_service`` call.
The ``rte_cryptodev_dp_configure_service`` call initializes or updates the
``struct rte_crypto_dp_service_ctx`` buffer, which contains the
driver-specific queue pair data pointer, the service context buffer, and a
set of function pointers for enqueuing and dequeuing different algorithms'
operations. ``rte_cryptodev_dp_configure_service`` should be called:
- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
- When a different cryptodev session, security session, or session-less xform
is used (set the ``is_update`` parameter to 1).
Two different enqueue functions are provided.
- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
the ``rte_crypto_sym_vec`` structure.
- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
Neither enqueue function commands the crypto device to start processing until
the ``rte_cryptodev_dp_submit_done`` function is called. Until then the user
shall expect the driver only to store the necessary context data in the
``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
user wants to abandon the submitted operations, simply call the
``rte_cryptodev_dp_configure_service`` function again with the ``is_update``
parameter set to 0. The driver will recover the service context data to
the previous state.
To dequeue the operations the user also has two choices:
- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
user provides callback functions for the driver to get the dequeue count
and to perform post-processing such as writing the status field.
- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
As with enqueue, the ``rte_cryptodev_dp_dequeue_done`` function merges the
user's local service context data with the driver's queue operation data.
To abandon the dequeue operation (keeping the operations in the queue), the
user shall skip the ``rte_cryptodev_dp_dequeue_done`` call and instead call
the ``rte_cryptodev_dp_configure_service`` function with the ``is_update``
parameter set to 0.
There are a few limitations to the data-path service:
* Only in-place operations are supported.
* The APIs are NOT thread-safe.
* The direct APIs' enqueue/dequeue CANNOT be mixed with
``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst``, or vice
versa.
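Putting the pieces above together, the expected call flow looks roughly like the following C-style pseudocode. The function names are the ones introduced in this series; argument lists are elided because the exact signatures are defined in patch 1/4:

```
/* configure once before enqueue/dequeue starts */
size = rte_cryptodev_get_dp_service_ctx_data_size(dev_id);
ctx  = <user-allocated buffer of at least size bytes>;
rte_cryptodev_dp_configure_service(..., ctx, is_update = 0);

/* enqueue one job at a time, or a burst via rte_cryptodev_dp_sym_submit_vec */
rte_cryptodev_dp_submit_single_job(ctx, ...);
rte_cryptodev_dp_submit_done(ctx, ...);   /* commit; device starts work */
/* to abandon instead: call configure_service again with is_update = 0 */

/* dequeue a single job, or a callback-driven count via
 * rte_cryptodev_dp_sym_dequeue */
rte_cryptodev_dp_sym_dequeue_single_job(ctx, ...);
rte_cryptodev_dp_dequeue_done(ctx, ...);  /* merge context back to driver */
```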
v9:
- Changed return types of submit_done() and dequeue_done() APIs.
- Added release note update.
v8:
- Updated following by comments.
- Fixed a few bugs.
- Fixed ARM build error.
- Updated the unit tests to cover all cases.
v7:
- Fixed a few typos.
- Fixed length calculation bugs.
v6:
- Rebased on top of DPDK 20.08.
- Changed to service ctx and added single job submit/dequeue.
v5:
- Changed to use rte_crypto_sym_vec as input.
- Changed to use public APIs instead of use function pointer.
v4:
- Added missed patch.
v3:
- Instead of QAT only API, moved the API to cryptodev.
- Added cryptodev feature flags.
v2:
- Used a structure to simplify parameters.
- Added unit tests.
- Added documentation.
Fan Zhang (4):
cryptodev: add crypto data-path service APIs
crypto/qat: add crypto data-path service API support
test/crypto: add unit-test for cryptodev direct APIs
doc: add cryptodev service APIs guide
app/test/test_cryptodev.c | 461 ++++++++-
app/test/test_cryptodev.h | 7 +
app/test/test_cryptodev_blockcipher.c | 51 +-
doc/guides/prog_guide/cryptodev_lib.rst | 90 ++
doc/guides/rel_notes/release_20_11.rst | 7 +
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 947 ++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 49 +-
lib/librte_cryptodev/rte_cryptodev.c | 98 ++
lib/librte_cryptodev/rte_cryptodev.h | 335 ++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 +-
.../rte_cryptodev_version.map | 10 +
16 files changed, 2062 insertions(+), 74 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
@ 2020-09-08 8:42 ` Fan Zhang
2020-09-18 21:50 ` Akhil Goyal
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
` (3 subsequent siblings)
4 siblings, 1 reply; 84+ messages in thread
From: Fan Zhang @ 2020-09-08 8:42 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
Fan Zhang, Piotr Bronowski
This patch adds data-path service APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors and operation modes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
---
lib/librte_cryptodev/rte_crypto.h | 9 +
lib/librte_cryptodev/rte_crypto_sym.h | 49 ++-
lib/librte_cryptodev/rte_cryptodev.c | 98 +++++
lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 ++-
.../rte_cryptodev_version.map | 10 +
6 files changed, 540 insertions(+), 9 deletions(-)
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index fd5ef3a87..f009be9af 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op,
return 0;
}
+/** Crypto data-path service types */
+enum rte_crypto_dp_service {
+ RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
+ RTE_CRYPTO_DP_SYM_AUTH_ONLY,
+ RTE_CRYPTO_DP_SYM_CHAIN,
+ RTE_CRYPTO_DP_SYM_AEAD,
+ RTE_CRYPTO_DP_N_SERVICE
+};
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..376412e94 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -50,6 +50,30 @@ struct rte_crypto_sgl {
uint32_t num;
};
+/**
+ * Symmetric crypto additional data other than source and destination data.
+ * Supposed to be used to pass IV/digest/AAD data buffers with lengths
+ * defined when creating the crypto session.
+ */
+union rte_crypto_sym_additional_data {
+ struct {
+ void *cipher_iv_ptr;
+ rte_iova_t cipher_iv_iova;
+ void *auth_iv_ptr;
+ rte_iova_t auth_iv_iova;
+ void *digest_ptr;
+ rte_iova_t digest_iova;
+ } cipher_auth;
+ struct {
+ void *iv_ptr;
+ rte_iova_t iv_iova;
+ void *digest_ptr;
+ rte_iova_t digest_iova;
+ void *aad_ptr;
+ rte_iova_t aad_iova;
+ } aead;
+};
+
/**
* Synchronous operation descriptor.
* Supposed to be used with CPU crypto API call.
@@ -57,12 +81,25 @@ struct rte_crypto_sgl {
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
- /** array of pointers to IV */
- void **iv;
- /** array of pointers to AAD */
- void **aad;
- /** array of pointers to digest */
- void **digest;
+
+ union {
+
+ /* Supposed to be used with CPU crypto API call. */
+ struct {
+ /** array of pointers to IV */
+ void **iv;
+ /** array of pointers to AAD */
+ void **aad;
+ /** array of pointers to digest */
+ void **digest;
+ };
+
+ /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec()
+ * call.
+ */
+ union rte_crypto_sym_additional_data *additional_data;
+ };
+
/**
* array of statuses for each operation:
* - 0 on success
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..4f59cf800 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1914,6 +1914,104 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
}
+int
+rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id)
+{
+ struct rte_cryptodev *dev;
+ int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
+ int32_t priv_size;
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (*dev->dev_ops->get_drv_ctx_size == NULL ||
+ !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)) {
+ return -1;
+ }
+
+ priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
+ if (priv_size < 0)
+ return -1;
+
+ return RTE_ALIGN_CEIL((size + priv_size), 8);
+}
+
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
+{
+ struct rte_cryptodev *dev;
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -1;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)
+ || dev->dev_ops->configure_service == NULL)
+ return -1;
+
+ return (*dev->dev_ops->configure_service)(dev, qp_id, service_type,
+ sess_type, session_ctx, ctx, is_update);
+}
+
+int
+rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque)
+{
+ return _cryptodev_dp_submit_single_job(ctx, data, n_data_vecs, ofs,
+ additional_data, opaque);
+}
+
+uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
+ ofs, opaque);
+}
+
+int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return _cryptodev_dp_sym_dequeue_single_job(ctx, out_opaque);
+}
+
+int
+rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n)
+{
+ return (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+int
+rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n)
+{
+ return (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n);
+}
+
+uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data,
+ get_dequeue_count, post_dequeue, out_opaque, is_opaque_array,
+ n_success_jobs);
+}
+
/** Initialise rte_crypto_op mempool element */
static void
rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..4da0389d1 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
/**< Support symmetric session-less operations */
#define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
/**< Support operations on data which is not byte aligned */
-
+#define RTE_CRYPTODEV_FF_DATA_PATH_SERVICE (1ULL << 24)
+/**< Support data-path service APIs taking raw data as input */
/**
* Get the name of a crypto device feature flag
@@ -1351,6 +1352,338 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
struct rte_crypto_sym_vec *vec);
+/**
+ * Get the size of the data-path service context for a device.
+ *
+ * @param dev_id The device identifier.
+ *
+ * @return
+ * - If the device supports data-path service, return the context size.
+ * - If the device does not support the data-path service, return -1.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id);
+
+/**
+ * Union of different crypto session types, including session-less xform
+ * pointer.
+ */
+union rte_cryptodev_session_ctx {
+ struct rte_cryptodev_sym_session *crypto_sess;
+ struct rte_crypto_sym_xform *xform;
+ struct rte_security_session *sec_sess;
+};
+
+/**
+ * Submit a data vector into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void **opaque);
+
+/**
+ * Submit a single job into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param additional_data IV, digest, and aad data.
+ * @param opaque The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_submit_single_job_t)(
+ void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque);
+
+/**
+ * Inform the queue pair to start processing or finish dequeuing all
+ * submitted/dequeued jobs.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param n The total number of submitted jobs.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_sym_operation_done_t)(void *qp,
+ uint8_t *service_data, uint32_t n);
+
+/**
+ * Typedef of the user-provided callback for the driver to obtain the
+ * dequeue count. The function may return a fixed number or a number parsed
+ * from the opaque data stored in the first processed job.
+ *
+ * @param opaque Dequeued opaque data.
+ **/
+typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque);
+
+/**
+ * Typedef of the user-provided callback to handle post-dequeue operations,
+ * such as filling in the status.
+ *
+ * @param opaque	Dequeued opaque data. In case
+ *			*is_opaque_array* is set, this value will
+ *			be the opaque data stored in the specific
+ *			processed job referenced by index,
+ *			otherwise it will be the opaque data
+ *			stored in the first processed job in the burst.
+ * @param index Index number of the processed job.
+ * @param is_op_success Driver filled operation status.
+ **/
+typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index,
+ uint8_t is_op_success);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque		Opaque pointer array to be retrieved from
+ *				the device queue. In case
+ *				*is_opaque_array* is set there should
+ *				be enough room to store all opaque data.
+ * @param is_opaque_array	Set to 1 if the opaque data of every
+ *				dequeued job shall be written into the
+ *				*out_opaque* array.
+ * @param n_success_jobs	Driver-written value to specify the
+ *				total count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param service_data Driver specific service data.
+ * @param out_opaque		Opaque pointer to be retrieved from
+ * device queue.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
+ void *qp, uint8_t *service_data, void **out_opaque);
+
+/**
+ * Context data for asynchronous crypto process.
+ */
+struct rte_crypto_dp_service_ctx {
+ void *qp_data;
+
+ struct {
+ cryptodev_dp_submit_single_job_t submit_single_job;
+ cryptodev_dp_sym_submit_vec_t submit_vec;
+ cryptodev_dp_sym_operation_done_t submit_done;
+ cryptodev_dp_sym_dequeue_t dequeue_opaque;
+ cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
+ cryptodev_dp_sym_operation_done_t dequeue_done;
+ };
+
+ /* Driver specific service data */
+ __extension__ uint8_t drv_service_data[];
+};
+
+/**
+ * Configure one data-path service context. When calling this function for
+ * the first time the user should clear the *is_update* parameter and the
+ * driver will fill the necessary operation data into the ctx buffer. The
+ * data cached in the ctx buffer will not take effect until
+ * rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param service_type Type of the service requested.
+ * @param sess_type	Session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update	Set to 1 if ctx is pre-initialized but needs
+ *			to be updated to a different service type or
+ *			session; the rest of the driver data remains
+ *			the same. Since the service context buffer is
+ *			provided by the user, the driver will check
+ *			neither the validity of the buffer nor its
+ *			content. It is the user's obligation to
+ *			initialize the buffer and set this field
+ *			properly.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
+
+static __rte_always_inline int
+_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque)
+{
+ return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
+ data, n_data_vecs, ofs, additional_data, opaque);
+}
+
+static __rte_always_inline int
+_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque)
+{
+ return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
+ out_opaque);
+}
+
+/**
+ * Submit a single job into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called. This is a
+ * simplified wrapper around the single-job submit operation.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param data The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param additional_data	IV, digest, and AAD data.
+ * @param opaque	The opaque data for dequeue.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *additional_data,
+ void *opaque);
+
+/**
+ * Submit a data vector into the device queue. The driver will not start
+ * processing until rte_cryptodev_dp_sym_submit_done() is called.
+ *
+ * @param ctx The initialized data-path service context data.
+ * @param vec The array of job vectors.
+ * @param ofs Start and stop offsets for auth and cipher operations.
+ * @param opaque The array of opaque data for dequeue.
+ * @return
+ * - The number of jobs successfully submitted.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque);
+
+/**
+ * Command the queue pair to start processing all jobs submitted since the
+ * last rte_cryptodev_dp_sym_submit_done() call.
+ *
+ * @param ctx	The initialized data-path service context data.
+ * @param n	The total number of submitted jobs.
+ * @return
+ *   - On success return 0.
+ *   - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_opaque		Opaque pointer array to be retrieved from
+ *				the device queue. In case
+ *				*is_opaque_array* is set there should
+ *				be enough room to store all opaque data.
+ * @param is_opaque_array	Set to 1 if the opaque data of every
+ *				dequeued job shall be written into the
+ *				*out_opaque* array.
+ * @param n_success_jobs	Driver-written value to specify the
+ *				total count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs);
+
+/**
+ * Dequeue a single symmetric crypto operation of user-provided data.
+ *
+ * @param ctx The initialized data-path service
+ * context data.
+ * @param out_opaque		Opaque pointer to be retrieved from the
+ * device queue. The driver shall support
+ * NULL input of this parameter.
+ *
+ * @return
+ * - 1 if the job is dequeued and the operation is a success.
+ * - 0 if the job is dequeued but the operation is failed.
+ * - -1 if no job is dequeued.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
+ void **out_opaque);
+
+/**
+ * Inform the queue pair that the dequeuing of jobs is finished.
+ *
+ * @param ctx	The initialized data-path service context data.
+ * @param n	The total number of jobs already dequeued.
+ * @return
+ *   - On success return 0.
+ *   - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
+ uint32_t n);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 81975d72b..e19de458c 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -316,6 +316,42 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+/**
+ * Typedef of the driver-provided function to get the service context
+ * private data size.
+ *
+ * @param dev Crypto device pointer.
+ *
+ * @return
+ * - On success return the size of the device's service context private data.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_get_service_ctx_size_t)(
+ struct rte_cryptodev *dev);
+
+/**
+ * Typedef of the driver-provided function to configure the data-path
+ * service.
+ *
+ * @param dev Crypto device pointer.
+ * @param qp_id Crypto device queue pair index.
+ * @param service_type Type of the service requested.
+ * @param sess_type	Session type.
+ * @param session_ctx Session context data.
+ * @param ctx The data-path service context data.
+ * @param is_update	Set to 1 if ctx is pre-initialized but needs
+ *			to be updated to a different service type or
+ *			session; the rest of the driver data remains
+ *			the same.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_configure_service_t)(
+ struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *ctx,
+ uint8_t is_update);
/** Crypto device operations function pointer table */
struct rte_cryptodev_ops {
@@ -348,8 +384,16 @@ struct rte_cryptodev_ops {
/**< Clear a Crypto sessions private data. */
cryptodev_asym_free_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
- /**< process input data synchronously (cpu-crypto). */
+ union {
+ cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+ /**< process input data synchronously (cpu-crypto). */
+ struct {
+ cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
+ /**< Get data path service context data size. */
+ cryptodev_dp_configure_service_t configure_service;
+ /**< Initialize crypto service ctx data. */
+ };
+ };
};
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..10388ae90 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -105,4 +105,14 @@ EXPERIMENTAL {
# added in 20.08
rte_cryptodev_get_qp_status;
+
+ # added in 20.11
+ rte_cryptodev_dp_configure_service;
+ rte_cryptodev_dp_get_service_ctx_data_size;
+ rte_cryptodev_dp_sym_dequeue;
+ rte_cryptodev_dp_sym_dequeue_done;
+ rte_cryptodev_dp_sym_dequeue_single_job;
+ rte_cryptodev_dp_sym_submit_done;
+ rte_cryptodev_dp_sym_submit_single_job;
+ rte_cryptodev_dp_sym_submit_vec;
};
--
2.20.1
* [dpdk-dev] [dpdk-dev v9 2/4] crypto/qat: add crypto data-path service API support
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-09-08 8:42 ` Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
` (2 subsequent siblings)
4 siblings, 0 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-08 8:42 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
This patch updates the QAT PMD to add crypto data-path service API support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
drivers/common/qat/Makefile | 1 +
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 13 +
drivers/crypto/qat/qat_sym_hw_dp.c | 947 +++++++++++++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
5 files changed, 969 insertions(+), 2 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile
index 85d420709..1b71bbbab 100644
--- a/drivers/common/qat/Makefile
+++ b/drivers/common/qat/Makefile
@@ -42,6 +42,7 @@ endif
SRCS-y += qat_sym.c
SRCS-y += qat_sym_session.c
SRCS-y += qat_sym_pmd.c
+ SRCS-y += qat_sym_hw_dp.c
build_qat = yes
endif
endif
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
index a225f374a..bc90ec44c 100644
--- a/drivers/crypto/qat/meson.build
+++ b/drivers/crypto/qat/meson.build
@@ -15,6 +15,7 @@ if dep.found()
qat_sources += files('qat_sym_pmd.c',
'qat_sym.c',
'qat_sym_session.c',
+ 'qat_sym_hw_dp.c',
'qat_asym_pmd.c',
'qat_asym.c')
qat_ext_deps += dep
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 1a9748849..ea2db0ca0 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -264,6 +264,18 @@ qat_sym_process_response(void **op, uint8_t *resp)
}
*op = (void *)rx_op;
}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ uint8_t is_update);
+
+int
+qat_sym_get_service_ctx_size(struct rte_cryptodev *dev);
+
#else
static inline void
@@ -276,5 +288,6 @@ static inline void
qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused)
{
}
+
#endif
#endif /* _QAT_SYM_H_ */
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
new file mode 100644
index 000000000..bc0dde9c5
--- /dev/null
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -0,0 +1,947 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_cryptodev_pmd.h>
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym.h"
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_qp.h"
+
+struct qat_sym_dp_service_ctx {
+ struct qat_sym_session *session;
+ uint32_t tail;
+ uint32_t head;
+ uint16_t cached_enqueue;
+ uint16_t cached_dequeue;
+ enum rte_crypto_dp_service last_service_type;
+};
+
+static __rte_always_inline int32_t
+qat_sym_dp_get_data(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs)
+{
+ struct qat_queue *tx_queue;
+ struct qat_sym_op_cookie *cookie;
+ struct qat_sgl *list;
+ uint32_t i;
+ uint32_t total_len;
+
+ if (likely(n_data_vecs == 1)) {
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ data[0].iova;
+ req->comn_mid.src_length = req->comn_mid.dst_length =
+ data[0].len;
+ return data[0].len;
+ }
+
+ if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER)
+ return -1;
+
+ total_len = 0;
+ tx_queue = &qp->tx_q;
+
+ ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags,
+ QAT_COMN_PTR_TYPE_SGL);
+ cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz];
+ list = (struct qat_sgl *)&cookie->qat_sgl_src;
+
+ for (i = 0; i < n_data_vecs; i++) {
+ list->buffers[i].len = data[i].len;
+ list->buffers[i].resrvd = 0;
+ list->buffers[i].addr = data[i].iova;
+ if (total_len + data[i].len > UINT32_MAX) {
+ QAT_DP_LOG(ERR, "Message too long");
+ return -1;
+ }
+ total_len += data[i].len;
+ }
+
+ list->num_bufs = i;
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ cookie->qat_sgl_src_phys_addr;
+ req->comn_mid.src_length = req->comn_mid.dst_length = 0;
+ return total_len;
+}
+
+static __rte_always_inline void
+set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
+ union rte_crypto_sym_additional_data *a_data, uint32_t iv_len,
+ struct icp_qat_fw_la_bulk_req *qat_req)
+{
+ /* copy IV into request if it fits */
+ if (iv_len <= sizeof(cipher_param->u.cipher_IV_array))
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ a_data->cipher_auth.cipher_iv_ptr, iv_len);
+ else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr =
+ a_data->cipher_auth.cipher_iv_iova;
+ }
+}
+
+#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \
+ (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \
+ ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status))
+
+static __rte_always_inline void
+qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n)
+{
+ uint32_t i;
+
+ for (i = 0; i < n; i++)
+ sta[i] = status;
+}
+
+#define QAT_SYM_DP_CHECK_ENQ_POSSIBLE(q, c, n) \
+ (q->enqueued - q->dequeued + c + n < q->max_inflights)
+
+static __rte_always_inline void
+submit_one_aead_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param =
+ (void *)&req->serv_specif_rqpars;
+ struct icp_qat_fw_la_auth_req_params *auth_param =
+ (void *)((uint8_t *)&req->serv_specif_rqpars +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+ uint8_t *aad_data;
+ uint8_t aad_ccm_real_len;
+ uint8_t aad_len_field_sz;
+ uint32_t msg_len_be;
+ rte_iova_t aad_iova = 0;
+ uint8_t q;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ a_data->aead.iv_ptr, ctx->cipher_iv.length);
+ aad_iova = a_data->aead.aad_iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC:
+ aad_data = a_data->aead.aad_ptr;
+ aad_iova = a_data->aead.aad_iova;
+ aad_ccm_real_len = 0;
+ aad_len_field_sz = 0;
+ msg_len_be = rte_bswap32((uint32_t)data_len -
+ ofs.ofs.cipher.head);
+
+ if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) {
+ aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ aad_ccm_real_len = ctx->aad_len -
+ ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ } else {
+ aad_data = a_data->aead.iv_ptr;
+ aad_iova = a_data->aead.iv_iova;
+ }
+
+ q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length;
+ aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS(
+ aad_len_field_sz, ctx->digest_length, q);
+ if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET + (q -
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE),
+ (uint8_t *)&msg_len_be,
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE);
+ } else {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)&msg_len_be +
+ (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE
+ - q), q);
+ }
+
+ if (aad_len_field_sz > 0) {
+ *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] =
+ rte_bswap16(aad_ccm_real_len);
+
+ if ((aad_ccm_real_len + aad_len_field_sz)
+ % ICP_QAT_HW_CCM_AAD_B0_LEN) {
+ uint8_t pad_len = 0;
+ uint8_t pad_idx = 0;
+
+ pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ((aad_ccm_real_len +
+ aad_len_field_sz) %
+ ICP_QAT_HW_CCM_AAD_B0_LEN);
+ pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN +
+ aad_ccm_real_len +
+ aad_len_field_sz;
+ memset(&aad_data[pad_idx], 0, pad_len);
+ }
+ }
+
+ rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array)
+ + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)a_data->aead.iv_ptr +
+ ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length);
+ *(uint8_t *)&cipher_param->u.cipher_IV_array[0] =
+ q - ICP_QAT_HW_CCM_NONCE_OFFSET;
+
+ rte_memcpy((uint8_t *)a_data->aead.aad_ptr +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)a_data->aead.iv_ptr +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ break;
+ default:
+ break;
+ }
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_param->auth_off = ofs.ofs.cipher.head;
+ auth_param->auth_len = cipher_param->cipher_length;
+ auth_param->auth_res_addr = a_data->aead.digest_iova;
+ auth_param->u1.aad_adr = aad_iova;
+
+ if (ctx->is_single_pass) {
+ cipher_param->spc_aad_addr = aad_iova;
+ cipher_param->spc_auth_res_addr = a_data->aead.digest_iova;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_aead_job(ctx, req, a_data, ofs,
+ (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_aead_job(ctx, req, vec->additional_data + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_cipher_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+
+ /* cipher IV */
+ set_cipher_iv(cipher_param, a_data, ctx->cipher_iv.length, req);
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data,
+ void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_cipher_job(ctx, req, a_data, ofs, (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_cipher_job(ctx, req, vec->additional_data + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_auth_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = data_len - ofs.ofs.auth.head -
+ ofs.ofs.auth.tail;
+ auth_param->auth_res_addr = a_data->cipher_auth.digest_iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = a_data->cipher_auth.auth_iv_iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ a_data->cipher_auth.auth_iv_ptr,
+ ctx->auth_iv.length);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data, void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_auth_job(ctx, req, a_data, ofs, (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_auth_job(ctx, req, vec->additional_data + i, ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_chain_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data,
+ uint16_t n_data_vecs, union rte_crypto_sym_additional_data *a_data,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+ rte_iova_t auth_iova_end;
+ int32_t cipher_len, auth_len;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ cipher_len = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+
+ assert(cipher_len > 0 && auth_len > 0);
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = cipher_len;
+ set_cipher_iv(cipher_param, a_data, ctx->cipher_iv.length, req);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = auth_len;
+ auth_param->auth_res_addr = a_data->cipher_auth.digest_iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = a_data->cipher_auth.auth_iv_iova;
+
+ if (unlikely(n_data_vecs > 1)) {
+ int auth_end_get = 0, i = n_data_vecs - 1;
+ struct rte_crypto_vec *cvec = &data[0];
+ uint32_t len;
+
+ len = data_len - ofs.ofs.auth.tail;
+
+ while (i >= 0 && len > 0) {
+ if (cvec->len >= len) {
+ auth_iova_end = cvec->iova +
+ (cvec->len - len);
+ len = 0;
+ auth_end_get = 1;
+ break;
+ }
+ len -= cvec->len;
+ i--;
+ cvec++;
+ }
+
+ assert(auth_end_get != 0);
+ } else
+ auth_iova_end = data[0].iova + auth_param->auth_off +
+ auth_param->auth_len;
+
+ /* Then check if digest-encrypted conditions are met */
+ if ((auth_param->auth_off + auth_param->auth_len <
+ cipher_param->cipher_offset +
+ cipher_param->cipher_length) &&
+ (a_data->cipher_auth.digest_iova == auth_iova_end)) {
+ /* Handle partial digest encryption */
+ if (cipher_param->cipher_offset +
+ cipher_param->cipher_length <
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length)
+ req->comn_mid.dst_length =
+ req->comn_mid.src_length =
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length;
+ struct icp_qat_fw_comn_req_hdr *header =
+ &req->comn_hdr;
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+ header->serv_specif_flags,
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+ }
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ union rte_crypto_sym_additional_data *a_data, void *opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque;
+
+ submit_one_chain_job(ctx, req, data, n_data_vecs, a_data, ofs,
+ (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *service_data,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num);
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i];
+ submit_one_chain_job(ctx, req, vec->sgl[i].vec, vec->sgl[i].num,
+ vec->additional_data + i, ofs, (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_dequeue(void *qp_data, uint8_t *service_data,
+ rte_cryptodev_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_post_dequeue_t post_dequeue,
+ void **out_opaque, uint8_t is_opaque_array,
+ uint32_t *n_success_jobs)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct icp_qat_fw_comn_resp *resp;
+ void *resp_opaque;
+ uint32_t i, n, inflight;
+ uint32_t head;
+ uint8_t status;
+
+ *n_success_jobs = 0;
+ head = dp_ctx->head;
+
+ inflight = qp->enqueued - qp->dequeued;
+ if (unlikely(inflight == 0))
+ return 0;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ head);
+ /* no operation ready */
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return 0;
+
+ resp_opaque = (void *)(uintptr_t)resp->opaque_data;
+ /* get the dequeue count */
+ n = get_dequeue_count(resp_opaque);
+ if (unlikely(n == 0))
+ return 0;
+
+ out_opaque[0] = resp_opaque;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ post_dequeue(resp_opaque, 0, status);
+ *n_success_jobs += status;
+
+ head = (head + rx_queue->msg_size) & rx_queue->modulo_mask;
+
+ /* we already finished dequeue when n == 1 */
+ if (unlikely(n == 1)) {
+ i = 1;
+ goto end_deq;
+ }
+
+ if (is_opaque_array) {
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp ==
+ ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ out_opaque[i] = (void *)(uintptr_t)
+ resp->opaque_data;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ *n_success_jobs += status;
+ post_dequeue(out_opaque[i], i, status);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ }
+
+ goto end_deq;
+ }
+
+ /* opaque is not array */
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ post_dequeue(resp_opaque, i, status);
+ *n_success_jobs += status;
+ }
+
+end_deq:
+ dp_ctx->head = head;
+ dp_ctx->cached_dequeue += i;
+ return i;
+}
+
+static __rte_always_inline int
+qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *service_data,
+ void **out_opaque)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+
+ register struct icp_qat_fw_comn_resp *resp;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ dp_ctx->head);
+
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return -1;
+
+ *out_opaque = (void *)(uintptr_t)resp->opaque_data;
+
+ dp_ctx->head = (dp_ctx->head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ dp_ctx->cached_dequeue++;
+
+ return QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+}
+
+static __rte_always_inline int
+qat_sym_dp_kick_tail(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+
+ if (unlikely(dp_ctx->cached_enqueue != n))
+ return -1;
+
+ qp->enqueued += n;
+ qp->stats.enqueued_count += n;
+
+ tx_queue->tail = dp_ctx->tail;
+
+ WRITE_CSR_RING_TAIL(qp->mmap_bar_addr,
+ tx_queue->hw_bundle_number,
+ tx_queue->hw_queue_number, tx_queue->tail);
+ tx_queue->csr_tail = tx_queue->tail;
+ dp_ctx->cached_enqueue = 0;
+
+ return 0;
+}
+
+static __rte_always_inline int
+qat_sym_dp_update_head(void *qp_data, uint8_t *service_data, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct qat_sym_dp_service_ctx *dp_ctx = (void *)service_data;
+
+ if (unlikely(dp_ctx->cached_dequeue != n))
+ return -1;
+
+ rx_queue->head = dp_ctx->head;
+ rx_queue->nb_processed_responses += n;
+ qp->dequeued += n;
+ qp->stats.dequeued_count += n;
+ if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) {
+ uint32_t old_head, new_head;
+ uint32_t max_head;
+
+ old_head = rx_queue->csr_head;
+ new_head = rx_queue->head;
+ max_head = qp->nb_descriptors * rx_queue->msg_size;
+
+ /* write out free descriptors */
+ void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head;
+
+ if (new_head < old_head) {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE,
+ max_head - old_head);
+ memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE,
+ new_head);
+ } else {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head -
+ old_head);
+ }
+ rx_queue->nb_processed_responses = 0;
+ rx_queue->csr_head = new_head;
+
+ /* write current head to CSR */
+ WRITE_CSR_RING_HEAD(qp->mmap_bar_addr,
+ rx_queue->hw_bundle_number, rx_queue->hw_queue_number,
+ new_head);
+ }
+
+ dp_ctx->cached_dequeue = 0;
+ return 0;
+}
+
+int
+qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_dp_service service_type,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_dp_service_ctx *service_ctx,
+ uint8_t is_update)
+{
+ struct qat_qp *qp;
+ struct qat_sym_session *ctx;
+ struct qat_sym_dp_service_ctx *dp_ctx;
+
+ if (service_ctx == NULL || session_ctx.crypto_sess == NULL ||
+ sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -EINVAL;
+
+ qp = dev->data->queue_pairs[qp_id];
+ ctx = (struct qat_sym_session *)get_sym_session_private_data(
+ session_ctx.crypto_sess, qat_sym_driver_id);
+ dp_ctx = (struct qat_sym_dp_service_ctx *)
+ service_ctx->drv_service_data;
+
+ if (!is_update) {
+ memset(service_ctx, 0, sizeof(*service_ctx) +
+ sizeof(struct qat_sym_dp_service_ctx));
+ service_ctx->qp_data = dev->data->queue_pairs[qp_id];
+ dp_ctx->tail = qp->tx_q.tail;
+ dp_ctx->head = qp->rx_q.head;
+ dp_ctx->cached_enqueue = dp_ctx->cached_dequeue = 0;
+ } else {
+ if (dp_ctx->last_service_type == service_type)
+ return 0;
+ }
+
+ /* set session after the memset above so it is not zeroed */
+ dp_ctx->session = ctx;
+
+ dp_ctx->last_service_type = service_type;
+
+ service_ctx->submit_done = qat_sym_dp_kick_tail;
+ service_ctx->dequeue_opaque = qat_sym_dp_dequeue;
+ service_ctx->dequeue_single = qat_sym_dp_dequeue_single_job;
+ service_ctx->dequeue_done = qat_sym_dp_update_head;
+
+ if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER ||
+ ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+ /* AES-GCM or AES-CCM */
+ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 ||
+ (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128
+ && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE
+ && ctx->qat_hash_alg ==
+ ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AEAD)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_aead_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_aead;
+ } else {
+ if (service_type != RTE_CRYPTO_DP_SYM_CHAIN)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_chain_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_chain;
+ }
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
+ if (service_type != RTE_CRYPTO_DP_SYM_AUTH_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_auth_jobs;
+ service_ctx->submit_single_job = qat_sym_dp_submit_single_auth;
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
+ if (service_type != RTE_CRYPTO_DP_SYM_CIPHER_ONLY)
+ return -1;
+ service_ctx->submit_vec = qat_sym_dp_submit_cipher_jobs;
+ service_ctx->submit_single_job =
+ qat_sym_dp_submit_single_cipher;
+ }
+
+ return 0;
+}
+
+int
+qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+ return sizeof(struct qat_sym_dp_service_ctx);
+}
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 314742f53..aaaf3e3f1 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = {
/* Crypto related operations */
.sym_session_get_size = qat_sym_session_get_private_size,
.sym_session_configure = qat_sym_session_configure,
- .sym_session_clear = qat_sym_session_clear
+ .sym_session_clear = qat_sym_session_clear,
+
+ /* Data plane service related operations */
+ .get_drv_ctx_size = qat_sym_get_service_ctx_size,
+ .configure_service = qat_sym_dp_configure_service_ctx,
};
#ifdef RTE_LIBRTE_SECURITY
@@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+ RTE_CRYPTODEV_FF_DATA_PATH_SERVICE;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
--
2.20.1
* [dpdk-dev] [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto " Fan Zhang
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 2/4] crypto/qat: add crypto data-path service API support Fan Zhang
@ 2020-09-08 8:42 ` Fan Zhang
2020-09-18 20:03 ` Akhil Goyal
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide Fan Zhang
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
4 siblings, 1 reply; 84+ messages in thread
From: Fan Zhang @ 2020-09-08 8:42 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
This patch adds QAT tests that exercise the cryptodev symmetric
crypto direct APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev.c | 461 +++++++++++++++++++++++---
app/test/test_cryptodev.h | 7 +
app/test/test_cryptodev_blockcipher.c | 51 ++-
3 files changed, 456 insertions(+), 63 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..13f642e0e 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -49,6 +49,8 @@
#define VDEV_ARGS_SIZE 100
#define MAX_NB_SESSIONS 4
+#define MAX_DRV_SERVICE_CTX_SIZE 256
+
#define IN_PLACE 0
#define OUT_OF_PLACE 1
@@ -57,6 +59,8 @@ static int gbl_driver_id;
static enum rte_security_session_action_type gbl_action_type =
RTE_SECURITY_ACTION_TYPE_NONE;
+int cryptodev_dp_test;
+
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
struct rte_mempool *large_mbuf_pool;
@@ -147,6 +151,182 @@ ceil_byte_length(uint32_t num_bits)
return (num_bits >> 3);
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
+ uint8_t cipher_iv_len)
+{
+ int32_t n;
+ struct rte_crypto_sym_op *sop;
+ struct rte_crypto_op *ret_op = NULL;
+ struct rte_crypto_vec data_vec[UINT8_MAX];
+ union rte_crypto_sym_additional_data a_data;
+ union rte_crypto_sym_ofs ofs;
+ int32_t status;
+ uint32_t max_len;
+ union rte_cryptodev_session_ctx sess;
+ enum rte_crypto_dp_service service_type;
+ uint32_t count = 0;
+ uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0};
+ struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
+ uint32_t cipher_offset = 0, cipher_len = 0, auth_offset = 0,
+ auth_len = 0;
+ int ctx_service_size;
+
+ sop = op->sym;
+
+ sess.crypto_sess = sop->session;
+
+ if (is_cipher && is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_CHAIN;
+ cipher_offset = sop->cipher.data.offset;
+ cipher_len = sop->cipher.data.length;
+ auth_offset = sop->auth.data.offset;
+ auth_len = sop->auth.data.length;
+ max_len = RTE_MAX(cipher_offset + cipher_len,
+ auth_offset + auth_len);
+ } else if (is_cipher) {
+ service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY;
+ cipher_offset = sop->cipher.data.offset;
+ cipher_len = sop->cipher.data.length;
+ max_len = cipher_len + cipher_offset;
+ } else if (is_auth) {
+ service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY;
+ auth_offset = sop->auth.data.offset;
+ auth_len = sop->auth.data.length;
+ max_len = auth_len + auth_offset;
+ } else { /* aead */
+ service_type = RTE_CRYPTO_DP_SYM_AEAD;
+ cipher_offset = sop->aead.data.offset;
+ cipher_len = sop->aead.data.length;
+ max_len = cipher_len + cipher_offset;
+ }
+
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ cipher_offset = cipher_offset >> 3;
+ auth_offset = auth_offset >> 3;
+ cipher_len = cipher_len >> 3;
+ auth_len = auth_len >> 3;
+ }
+
+ ctx_service_size = rte_cryptodev_dp_get_service_ctx_data_size(dev_id);
+ assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE &&
+ ctx_service_size > 0);
+
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ /* test update service */
+ if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
+ RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ n = rte_crypto_mbuf_to_vec(sop->m_src, 0, max_len,
+ data_vec, RTE_DIM(data_vec));
+ if (n < 0 || n > sop->m_src->nb_segs) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ ofs.raw = 0;
+
+ switch (service_type) {
+ case RTE_CRYPTO_DP_SYM_AEAD:
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ a_data.aead.iv_ptr = rte_crypto_op_ctod_offset(op, void *,
+ IV_OFFSET);
+ a_data.aead.iv_iova = rte_crypto_op_ctophys_offset(op,
+ IV_OFFSET);
+ a_data.aead.aad_ptr = (void *)sop->aead.aad.data;
+ a_data.aead.aad_iova = sop->aead.aad.phys_addr;
+ a_data.aead.digest_ptr = (void *)sop->aead.digest.data;
+ a_data.aead.digest_iova = sop->aead.digest.phys_addr;
+ break;
+ case RTE_CRYPTO_DP_SYM_CIPHER_ONLY:
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET);
+ a_data.cipher_auth.cipher_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ break;
+ case RTE_CRYPTO_DP_SYM_AUTH_ONLY:
+ ofs.ofs.auth.head = auth_offset;
+ ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
+ a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET + cipher_iv_len);
+ a_data.cipher_auth.auth_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET +
+ cipher_iv_len);
+ a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data;
+ a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr;
+ break;
+ case RTE_CRYPTO_DP_SYM_CHAIN:
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ ofs.ofs.auth.head = auth_offset;
+ ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
+ a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET);
+ a_data.cipher_auth.cipher_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET + cipher_iv_len);
+ a_data.cipher_auth.auth_iv_iova =
+ rte_crypto_op_ctophys_offset(op, IV_OFFSET +
+ cipher_iv_len);
+ a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data;
+ a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr;
+ break;
+ default:
+ break;
+ }
+
+ status = rte_cryptodev_dp_sym_submit_single_job(ctx, data_vec, n, ofs,
+ &a_data, (void *)op);
+ if (status < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ status = rte_cryptodev_dp_sym_submit_done(ctx, 1);
+ if (status < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ status = -1;
+ while (count++ < 65535 && status == -1) {
+ status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
+ (void **)&ret_op);
+ if (status == -1)
+ rte_pause();
+ }
+
+ if (status != -1) {
+ if (rte_cryptodev_dp_sym_dequeue_done(ctx, 1) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+ }
+
+ if (count == 65536 || status != 1 || ret_op != op) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ op->status = status == 1 ? RTE_CRYPTO_OP_STATUS_SUCCESS :
+ RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static void
process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
{
@@ -1656,6 +1836,9 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -1710,12 +1893,18 @@ test_AES_cipheronly_all(void)
static int
test_AES_docsis_all(void)
{
+ /* Data-path service does not support DOCSIS yet */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_blockcipher(BLKCIPHER_AES_DOCSIS_TYPE);
}
static int
test_DES_docsis_all(void)
{
+ /* Data-path service does not support DOCSIS yet */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_blockcipher(BLKCIPHER_DES_DOCSIS_TYPE);
}
@@ -2470,7 +2659,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -2549,7 +2742,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2619,6 +2816,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -2690,7 +2890,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2897,8 +3101,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
- ut_params->op);
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -2983,7 +3191,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3026,8 +3238,9 @@ test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ /* Data-path service does not support OOP */
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create KASUMI session */
@@ -3107,8 +3320,9 @@ test_kasumi_encryption_oop_sgl(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ /* Data-path service does not support OOP */
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -3192,8 +3406,9 @@ test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
+ /* Data-path service does not support OOP */
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create KASUMI session */
@@ -3306,7 +3521,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3381,7 +3600,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3419,7 +3642,7 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create SNOW 3G session */
@@ -3502,7 +3725,7 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -3621,7 +3844,7 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create SNOW 3G session */
@@ -3756,7 +3979,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -3791,7 +4018,7 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
- &cap_idx) == NULL)
+ &cap_idx) == NULL || cryptodev_dp_test)
return -ENOTSUP;
/* Create SNOW 3G session */
@@ -3924,7 +4151,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4019,7 +4250,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4087,6 +4322,8 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
printf("Device doesn't support digest encrypted.\n");
return -ENOTSUP;
}
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
}
/* Create SNOW 3G session */
@@ -4155,7 +4392,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4266,6 +4507,8 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -4344,7 +4587,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4453,6 +4700,8 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
uint64_t feat_flags = dev_info.feature_flags;
if (op_mode == OUT_OF_PLACE) {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED)) {
printf("Device doesn't support digest encrypted.\n");
return -ENOTSUP;
@@ -4526,7 +4775,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4638,6 +4891,8 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -4716,7 +4971,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4857,7 +5116,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4944,7 +5207,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5031,7 +5298,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5119,7 +5390,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5177,6 +5452,8 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -5251,7 +5528,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5359,6 +5640,8 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
return -ENOTSUP;
}
} else {
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -5437,7 +5720,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5580,6 +5867,9 @@ test_kasumi_decryption_test_case_2(void)
static int
test_kasumi_decryption_test_case_3(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_kasumi_decryption(&kasumi_test_case_3);
}
@@ -5779,6 +6069,9 @@ test_snow3g_auth_cipher_part_digest_enc_oop(void)
static int
test_snow3g_auth_cipher_test_case_3_sgl(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_snow3g_auth_cipher_sgl(
&snow3g_auth_cipher_test_case_3, IN_PLACE, 0);
}
@@ -5793,6 +6086,9 @@ test_snow3g_auth_cipher_test_case_3_oop_sgl(void)
static int
test_snow3g_auth_cipher_part_digest_enc_sgl(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
return test_snow3g_auth_cipher_sgl(
&snow3g_auth_cipher_partial_digest_encryption,
IN_PLACE, 0);
@@ -6146,10 +6442,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
unsigned int ciphertext_len;
struct rte_cryptodev_info dev_info;
- struct rte_crypto_op *op;
/* Check if device supports particular algorithms separately */
- if (test_mixed_check_if_unsupported(tdata))
+ if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -6161,6 +6456,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
return -ENOTSUP;
}
+ if (op_mode == OUT_OF_PLACE)
+ return -ENOTSUP;
+
/* Create the session */
if (verify)
retval = create_wireless_algo_cipher_auth_session(
@@ -6192,9 +6490,11 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
/* clear mbuf payload */
memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
rte_pktmbuf_tailroom(ut_params->ibuf));
- if (op_mode == OUT_OF_PLACE)
+ if (op_mode == OUT_OF_PLACE) {
memset(rte_pktmbuf_mtod(ut_params->obuf, uint8_t *), 0,
rte_pktmbuf_tailroom(ut_params->obuf));
+ }
ciphertext_len = ceil_byte_length(tdata->ciphertext.len_bits);
plaintext_len = ceil_byte_length(tdata->plaintext.len_bits);
@@ -6235,18 +6535,17 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
if (retval < 0)
return retval;
- op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
/* Check if the op failed because the device doesn't */
/* support this particular combination of algorithms */
- if (op == NULL && ut_params->op->status ==
+ if (ut_params->op == NULL && ut_params->op->status ==
RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
printf("Device doesn't support this mixed combination. "
"Test Skipped.\n");
return -ENOTSUP;
}
- ut_params->op = op;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -6337,10 +6636,9 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata,
uint8_t digest_buffer[10000];
struct rte_cryptodev_info dev_info;
- struct rte_crypto_op *op;
/* Check if device supports particular algorithms */
- if (test_mixed_check_if_unsupported(tdata))
+ if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -6440,20 +6738,18 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata,
if (retval < 0)
return retval;
- op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
/* Check if the op failed because the device doesn't */
/* support this particular combination of algorithms */
- if (op == NULL && ut_params->op->status ==
+ if (ut_params->op == NULL && ut_params->op->status ==
RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
printf("Device doesn't support this mixed combination. "
"Test Skipped.\n");
return -ENOTSUP;
}
- ut_params->op = op;
-
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = (op_mode == IN_PLACE ?
@@ -7043,6 +7339,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8540,6 +8839,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8833,6 +9135,9 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
&cap_idx) == NULL)
return -ENOTSUP;
+ /* Data-path service does not support OOP */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
/* not supported with CPU crypto */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
@@ -8923,8 +9228,9 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
- /* not supported with CPU crypto */
- if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+ /* not supported with CPU crypto and data-path service */
+ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO ||
+ cryptodev_dp_test)
return -ENOTSUP;
/* Create AEAD session */
@@ -9151,8 +9457,13 @@ test_authenticated_decryption_sessionless(
"crypto op session type not sessionless");
/* Process crypto operation */
- TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
- ut_params->op), "failed to process sym crypto op");
+ if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
+ else
+ TEST_ASSERT_NOT_NULL(process_crypto_request(
+ ts_params->valid_devs[0], ut_params->op),
+ "failed to process sym crypto op");
TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
@@ -9472,6 +9783,9 @@ test_MD5_HMAC_generate(const struct HMAC_MD5_vector *test_case)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -9530,6 +9844,9 @@ test_MD5_HMAC_verify(const struct HMAC_MD5_vector *test_case)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10098,6 +10415,9 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10215,6 +10535,9 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10780,7 +11103,10 @@ test_authentication_verify_fail_when_data_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10851,7 +11177,10 @@ test_authentication_verify_GMAC_fail_when_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10926,7 +11255,10 @@ test_authenticated_decryption_fail_when_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -11021,6 +11353,9 @@ test_authenticated_encryt_with_esn(
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
ut_params->op = process_crypto_request(
ts_params->valid_devs[0], ut_params->op);
@@ -11141,6 +11476,9 @@ test_authenticated_decrypt_with_esn(
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -11285,6 +11623,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
unsigned int sgl_in = fragsz < tdata->plaintext.len;
unsigned int sgl_out = (fragsz_oop ? fragsz_oop : fragsz) <
tdata->plaintext.len;
+ /* Data-path service does not support OOP */
+ if (cryptodev_dp_test)
+ return -ENOTSUP;
if (sgl_in && !sgl_out) {
if (!(dev_info.feature_flags &
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))
@@ -11480,6 +11821,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
if (oop == IN_PLACE &&
gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (cryptodev_dp_test)
+ process_sym_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -13041,6 +13385,29 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+ int ret;
+
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both "
+ "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
+ "are enabled in config file to run this testsuite.\n");
+ return TEST_SKIPPED;
+ }
+
+ cryptodev_dp_test = 1;
+ ret = unit_test_suite_runner(&cryptodev_testsuite);
+ cryptodev_dp_test = 0;
+
+ return ret;
+}
+
+REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api);
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..e4e4c7626 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -71,6 +71,8 @@
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+extern int cryptodev_dp_test;
+
/**
* Write (spread) data from buffer to mbuf data
*
@@ -209,4 +211,9 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len,
return NULL;
}
+void
+process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op,
+ uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
+ uint8_t cipher_iv_len);
+
#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 221262341..311b34c15 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -462,25 +462,44 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
/* Process crypto operation */
- if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Error sending packet for encryption");
- status = TEST_FAILED;
- goto error_exit;
- }
+ if (cryptodev_dp_test) {
+ uint8_t is_cipher = 0, is_auth = 0;
- op = NULL;
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
+ RTE_LOG(DEBUG, USER1,
+ "QAT direct API does not support OOP, Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED");
+ status = TEST_SUCCESS;
+ goto error_exit;
+ }
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
+ is_cipher = 1;
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
+ is_auth = 1;
- while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
- rte_pause();
+ process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0,
+ tdata->iv.len);
+ } else {
+ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Error sending packet for encryption");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
- if (!op) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Failed to process sym crypto op");
- status = TEST_FAILED;
- goto error_exit;
+ op = NULL;
+
+ while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
+ rte_pause();
+
+ if (!op) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Failed to process sym crypto op");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
}
debug_hexdump(stdout, "m_src(after):",
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
` (2 preceding siblings ...)
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
@ 2020-09-08 8:42 ` Fan Zhang
2020-09-18 20:39 ` Akhil Goyal
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
4 siblings, 1 reply; 84+ messages in thread
From: Fan Zhang @ 2020-09-08 8:42 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Fan Zhang
This patch updates the programmer's guide to demonstrate the usage
and limitations of the cryptodev symmetric crypto data-path service
APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++
doc/guides/rel_notes/release_20_11.rst | 7 ++
2 files changed, 97 insertions(+)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..1321e4c5d 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -631,6 +631,96 @@ a call argument. Status different than zero must be treated as error.
For more details, e.g. how to convert an mbuf to an SGL, please refer to an
example usage in the IPsec library implementation.
+Cryptodev Direct Data-path Service API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The direct crypto data-path service is a set of APIs provided especially for
+external libraries and applications that want to take advantage of the rich
+features provided by cryptodev, but do not necessarily depend on cryptodev
+operations, mempools, or mbufs in their data-path implementations.
+
+The direct crypto data-path service has the following advantages:
+
+- Supports raw data pointers and physical addresses as input.
+- Does not require a specific data structure allocated from the heap, such as
+  a cryptodev operation.
+- Enqueue in a burst or one operation at a time. The service allows enqueuing
+  in a burst similar to the ``rte_cryptodev_enqueue_burst`` operation, or
+  enqueuing one job at a time while maintaining the necessary context data
+  locally for the next single-job enqueue operation. The latter method is
+  especially helpful when the user application's crypto operations are not
+  naturally clustered into a burst: enqueuing one operation at a time avoids
+  an additional batching loop and reduces the cache misses caused by such
+  double "looping".
+- Customerizable dequeue count. Instead of dequeue maximum possible operations
+ as same as ``rte_cryptodev_dequeue_burst`` operation, the service allows the
+ user to provide a callback function to decide how many operations to be
+ dequeued. This is especially helpful when the expected dequeue count is
+ hidden inside the opaque data stored during enqueue. The user can provide
+ the callback function to parse the opaque data structure.
+- Abandon enqueue and dequeue anytime. One of the drawbacks of
+ ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
+ operations are: once an operation is enqueued/dequeued there is no way to
+ undo the operation. The service make the operation abandon possible by
+ creating a local copy of the queue operation data in the service context
+ data. The data will be written back to driver maintained operation data
+ when enqueue or dequeue done function is called.
+
+Cryptodev PMDs that support this feature will have the
+``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag presented. To use this
+feature the function ``rte_cryptodev_get_dp_service_ctx_data_size`` should
+be called to get the data-path service context data size. The user should
+create a local buffer at least this size long and initialize it using the
+``rte_cryptodev_dp_configure_service`` function call.
+
+The ``rte_cryptodev_dp_configure_service`` function call initializes or
+updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
+driver-specific queue pair data pointer, the service context buffer, and a
+set of function pointers to enqueue and dequeue different algorithms'
+operations. ``rte_cryptodev_dp_configure_service`` should be called:
+
+- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
+- When a different cryptodev session, security session, or session-less xform
+  is used (set the ``is_update`` parameter to 1).
+
+Two different enqueue functions are provided.
+
+- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
+ the ``rte_crypto_sym_vec`` structure.
+- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
+
+Neither enqueue function will command the crypto device to start processing
+until the ``rte_cryptodev_dp_submit_done`` function is called. Before then
+the driver only stores the necessary context data in the
+``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
+user wants to abandon the submitted operations, simply call the
+``rte_cryptodev_dp_configure_service`` function instead with the parameter
+``is_update`` set to 0. The driver will recover the service context data to
+its previous state.
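The deferred-commit model described above can be sketched with a minimal mock (all names below are hypothetical illustrations, not the DPDK implementation): submissions only advance a local shadow copy inside the context, the "done" call publishes it to the driver's queue data, and re-running configuration abandons uncommitted jobs by reloading the shadow from the driver state.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical driver-maintained queue pair data (mock only). */
struct mock_qp { uint16_t tail; };

/* Hypothetical service context holding a local shadow copy. */
struct mock_dp_ctx {
	struct mock_qp *qp;
	uint16_t shadow_tail;
};

/* Models rte_cryptodev_dp_configure_service with is_update == 0:
 * (re)load the local shadow from the driver data, discarding any
 * submitted-but-uncommitted jobs. */
static void mock_configure(struct mock_dp_ctx *ctx, struct mock_qp *qp)
{
	ctx->qp = qp;
	ctx->shadow_tail = qp->tail;
}

/* Models a single-job submit: only the local shadow advances,
 * so the device sees nothing yet. */
static void mock_submit_single(struct mock_dp_ctx *ctx)
{
	ctx->shadow_tail++;
}

/* Models the submit-done call: write the shadow back to the driver
 * data, at which point the device would start processing. */
static void mock_submit_done(struct mock_dp_ctx *ctx)
{
	ctx->qp->tail = ctx->shadow_tail;
}
```

Submitting jobs and then calling `mock_configure()` again instead of `mock_submit_done()` leaves the driver's `tail` unchanged, which is exactly the "abandon" behavior the service describes.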
+
+To dequeue the operations the user also has two choices:
+
+- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation. The
+  user needs to provide the callback function for the driver to get the
+  dequeue count and perform post-processing, such as writing the status
+  field.
+- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
+
+As with enqueue, the function ``rte_cryptodev_dp_dequeue_done`` is used to
+merge the user's local service context data with the driver's queue operation
+data. To abandon the dequeue operation (keeping the operations in the
+queue), the user shall avoid calling ``rte_cryptodev_dp_dequeue_done`` and
+instead call the ``rte_cryptodev_dp_configure_service`` function with the
+parameter ``is_update`` set to 0.
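The callback-driven dequeue count can be sketched the same way (again with hypothetical mock names): the user's callback inspects the opaque data stored at enqueue time and tells the driver how many operations belong to the burst, instead of the driver draining everything that is ready.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical opaque data the user stored at enqueue time; the first
 * job of a burst records how many jobs the burst contains. */
struct mock_opaque { uint32_t burst_size; };

/* Modeled on the idea of a user-supplied get-dequeue-count callback:
 * given the opaque data of the first ready op, return how many ops
 * should be dequeued. */
typedef uint32_t (*mock_get_dequeue_count_t)(void *opaque);

static uint32_t burst_size_cb(void *opaque)
{
	return ((struct mock_opaque *)opaque)->burst_size;
}

/* Mock driver-side dequeue: ask the callback for the desired count,
 * then dequeue at most that many of the ready operations. */
static uint32_t mock_dequeue(uint32_t nb_ready, void *first_opaque,
		mock_get_dequeue_count_t cb)
{
	uint32_t want = cb(first_opaque);

	return want < nb_ready ? want : nb_ready;
}
```

This is why the expected dequeue count can stay "hidden inside the opaque data": only the user's callback knows how to parse it.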
+
+There are a few limitations to the data-path service:
+
+* Only in-place operations are supported.
+* The APIs are NOT thread-safe.
+* The direct API's enqueue/dequeue CANNOT be mixed with
+  ``rte_cryptodev_enqueue_burst``/``rte_cryptodev_dequeue_burst``, or
+  vice versa.
+
+See the *DPDK API Reference* for details of each API definition.
+
Sample code
-----------
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index df227a177..159823345 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+ * **Added data-path APIs for cryptodev library.**
+
+ Added data-path APIs to cryptodev to accelerate external libraries or
+ applications that want fast cryptodev enqueue/dequeue
+ operations but do not necessarily depend on mbufs and the cryptodev
+ operation mempool.
+
Removed Items
-------------
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs Fan Zhang
@ 2020-09-18 20:03 ` Akhil Goyal
2020-09-21 12:41 ` Zhang, Roy Fan
0 siblings, 1 reply; 84+ messages in thread
From: Akhil Goyal @ 2020-09-18 20:03 UTC (permalink / raw)
To: Fan Zhang, dev; +Cc: fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski
Hi Fan,
> Subject: [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs
>
> This patch adds the QAT test to use cryptodev symmetric crypto
> direct APIs.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> app/test/test_cryptodev.c | 461 +++++++++++++++++++++++---
> app/test/test_cryptodev.h | 7 +
> app/test/test_cryptodev_blockcipher.c | 51 ++-
> 3 files changed, 456 insertions(+), 63 deletions(-)
>
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index 70bf6fe2c..13f642e0e 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -49,6 +49,8 @@
> #define VDEV_ARGS_SIZE 100
> #define MAX_NB_SESSIONS 4
>
> +#define MAX_DRV_SERVICE_CTX_SIZE 256
> +
> #define IN_PLACE 0
> #define OUT_OF_PLACE 1
>
> @@ -57,6 +59,8 @@ static int gbl_driver_id;
> static enum rte_security_session_action_type gbl_action_type =
> RTE_SECURITY_ACTION_TYPE_NONE;
>
> +int cryptodev_dp_test;
> +
Why do we need this? We should make the decision based on the feature flag of the crypto device.
> struct crypto_testsuite_params {
> struct rte_mempool *mbuf_pool;
> struct rte_mempool *large_mbuf_pool;
> @@ -147,6 +151,182 @@ ceil_byte_length(uint32_t num_bits)
> return (num_bits >> 3);
> }
>
> +void
> +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op
> *op,
> + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
> + uint8_t cipher_iv_len)
> +{
> + int32_t n;
> + struct rte_crypto_sym_op *sop;
> + struct rte_crypto_op *ret_op = NULL;
> + struct rte_crypto_vec data_vec[UINT8_MAX];
> + union rte_crypto_sym_additional_data a_data;
> + union rte_crypto_sym_ofs ofs;
> + int32_t status;
> + uint32_t max_len;
> + union rte_cryptodev_session_ctx sess;
> + enum rte_crypto_dp_service service_type;
> + uint32_t count = 0;
> + uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0};
> + struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
> + uint32_t cipher_offset = 0, cipher_len = 0, auth_offset = 0,
> + auth_len = 0;
> + int ctx_service_size;
> +
> + sop = op->sym;
> +
> + sess.crypto_sess = sop->session;
> +
> + if (is_cipher && is_auth) {
> + service_type = RTE_CRYPTO_DP_SYM_CHAIN;
> + cipher_offset = sop->cipher.data.offset;
> + cipher_len = sop->cipher.data.length;
> + auth_offset = sop->auth.data.offset;
> + auth_len = sop->auth.data.length;
> + max_len = RTE_MAX(cipher_offset + cipher_len,
> + auth_offset + auth_len);
> + } else if (is_cipher) {
> + service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY;
> + cipher_offset = sop->cipher.data.offset;
> + cipher_len = sop->cipher.data.length;
> + max_len = cipher_len + cipher_offset;
> + } else if (is_auth) {
> + service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY;
> + auth_offset = sop->auth.data.offset;
> + auth_len = sop->auth.data.length;
> + max_len = auth_len + auth_offset;
> + } else { /* aead */
> + service_type = RTE_CRYPTO_DP_SYM_AEAD;
> + cipher_offset = sop->aead.data.offset;
> + cipher_len = sop->aead.data.length;
> + max_len = cipher_len + cipher_offset;
> + }
> +
> + if (len_in_bits) {
> + max_len = max_len >> 3;
> + cipher_offset = cipher_offset >> 3;
> + auth_offset = auth_offset >> 3;
> + cipher_len = cipher_len >> 3;
> + auth_len = auth_len >> 3;
> + }
> +
> + ctx_service_size = rte_cryptodev_dp_get_service_ctx_data_size(dev_id);
> + assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE &&
> + ctx_service_size > 0);
> +
> + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
> + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) {
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
**_dp_configure_service does not convey the purpose of the API. What does "service" mean here?
Can we rename it to rte_cryptodev_configure_raw_dp?
Having an API to configure the crypto device for the raw data path gives better readability.
> +
> + /* test update service */
> + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
> + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) {
Do we really need an extra parameter to specify update? Can we not call the same API
again for update? Either way the implementation will be copying the complete information
from the sess.
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
> +
> + n = rte_crypto_mbuf_to_vec(sop->m_src, 0, max_len,
> + data_vec, RTE_DIM(data_vec));
> + if (n < 0 || n > sop->m_src->nb_segs) {
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
> +
> + ofs.raw = 0;
> +
> + switch (service_type) {
> + case RTE_CRYPTO_DP_SYM_AEAD:
> + ofs.ofs.cipher.head = cipher_offset;
> + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
> + a_data.aead.iv_ptr = rte_crypto_op_ctod_offset(op, void *,
> + IV_OFFSET);
> + a_data.aead.iv_iova = rte_crypto_op_ctophys_offset(op,
> + IV_OFFSET);
> + a_data.aead.aad_ptr = (void *)sop->aead.aad.data;
> + a_data.aead.aad_iova = sop->aead.aad.phys_addr;
> + a_data.aead.digest_ptr = (void *)sop->aead.digest.data;
> + a_data.aead.digest_iova = sop->aead.digest.phys_addr;
> + break;
> + case RTE_CRYPTO_DP_SYM_CIPHER_ONLY:
> + ofs.ofs.cipher.head = cipher_offset;
> + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
> + a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset(
> + op, void *, IV_OFFSET);
> + a_data.cipher_auth.cipher_iv_iova =
> + rte_crypto_op_ctophys_offset(op, IV_OFFSET);
> + break;
> + case RTE_CRYPTO_DP_SYM_AUTH_ONLY:
> + ofs.ofs.auth.head = auth_offset;
> + ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
> + a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset(
> + op, void *, IV_OFFSET + cipher_iv_len);
> + a_data.cipher_auth.auth_iv_iova =
> + rte_crypto_op_ctophys_offset(op, IV_OFFSET +
> + cipher_iv_len);
> + a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data;
> + a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr;
> + break;
> + case RTE_CRYPTO_DP_SYM_CHAIN:
> + ofs.ofs.cipher.head = cipher_offset;
> + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
> + ofs.ofs.auth.head = auth_offset;
> + ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
> + a_data.cipher_auth.cipher_iv_ptr = rte_crypto_op_ctod_offset(
> + op, void *, IV_OFFSET);
> + a_data.cipher_auth.cipher_iv_iova =
> + rte_crypto_op_ctophys_offset(op, IV_OFFSET);
> + a_data.cipher_auth.auth_iv_ptr = rte_crypto_op_ctod_offset(
> + op, void *, IV_OFFSET + cipher_iv_len);
> + a_data.cipher_auth.auth_iv_iova =
> + rte_crypto_op_ctophys_offset(op, IV_OFFSET +
> + cipher_iv_len);
> + a_data.cipher_auth.digest_ptr = (void *)sop->auth.digest.data;
> + a_data.cipher_auth.digest_iova = sop->auth.digest.phys_addr;
Instead of cipher_auth, it should be chain. Can we also support the "hash then cipher" case?
> + break;
> + default:
> + break;
> + }
> +
> + status = rte_cryptodev_dp_sym_submit_single_job(ctx, data_vec, n, ofs,
> + &a_data, (void *)op);
> + if (status < 0) {
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
> +
> + status = rte_cryptodev_dp_sym_submit_done(ctx, 1);
> + if (status < 0) {
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
> +
> +
> + status = -1;
> + while (count++ < 65535 && status == -1) {
> + status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
> + (void **)&ret_op);
> + if (status == -1)
> + rte_pause();
> + }
> +
> + if (status != -1) {
> + if (rte_cryptodev_dp_sym_dequeue_done(ctx, 1) < 0) {
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
> + }
> +
> + if (count == 65536 || status != 1 || ret_op != op) {
Remove the hardcoded 65536.
> + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> + return;
> + }
> +
> + op->status = status == 1 ? RTE_CRYPTO_OP_STATUS_SUCCESS :
> + RTE_CRYPTO_OP_STATUS_ERROR;
> +}
> +
> static void
> process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
> {
> @@ -1656,6 +1836,9 @@
> test_AES_CBC_HMAC_SHA512_decrypt_perform(struct
> rte_cryptodev_sym_session *sess,
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 0, 0);
Can we rename process_sym_hw_api_op to process_sym_hw_raw_op?
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params-
> >valid_devs[0],
> @@ -1710,12 +1893,18 @@ test_AES_cipheronly_all(void)
> static int
> test_AES_docsis_all(void)
> {
> + /* Data-path service does not support DOCSIS yet */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> return test_blockcipher(BLKCIPHER_AES_DOCSIS_TYPE);
> }
>
> static int
> test_DES_docsis_all(void)
> {
> + /* Data-path service does not support DOCSIS yet */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> return test_blockcipher(BLKCIPHER_DES_DOCSIS_TYPE);
> }
>
> @@ -2470,7 +2659,11 @@ test_snow3g_authentication(const struct
> snow3g_hash_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 1, 0);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> ut_params->obuf = ut_params->op->sym->m_src;
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -2549,7 +2742,11 @@ test_snow3g_authentication_verify(const struct
> snow3g_hash_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 1, 0);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> ut_params->obuf = ut_params->op->sym->m_src;
> @@ -2619,6 +2816,9 @@ test_kasumi_authentication(const struct
> kasumi_hash_test_data *tdata)
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 1, 0);
> else
> ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> @@ -2690,7 +2890,11 @@ test_kasumi_authentication_verify(const struct
> kasumi_hash_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 1, 0);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> ut_params->obuf = ut_params->op->sym->m_src;
> @@ -2897,8 +3101,12 @@ test_kasumi_encryption(const struct
> kasumi_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> - ut_params->op);
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> + ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> ut_params->obuf = ut_params->op->sym->m_dst;
> @@ -2983,7 +3191,11 @@ test_kasumi_encryption_sgl(const struct
> kasumi_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -3026,8 +3238,9 @@ test_kasumi_encryption_oop(const struct
> kasumi_test_data *tdata)
> struct rte_cryptodev_sym_capability_idx cap_idx;
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
> + /* Data-path service does not support OOP */
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> /* Create KASUMI session */
> @@ -3107,8 +3320,9 @@ test_kasumi_encryption_oop_sgl(const struct
> kasumi_test_data *tdata)
> struct rte_cryptodev_sym_capability_idx cap_idx;
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
> + /* Data-path service does not support OOP */
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> @@ -3192,8 +3406,9 @@ test_kasumi_decryption_oop(const struct
> kasumi_test_data *tdata)
> struct rte_cryptodev_sym_capability_idx cap_idx;
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
> + /* Data-path service does not support OOP */
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> /* Create KASUMI session */
> @@ -3306,7 +3521,11 @@ test_kasumi_decryption(const struct
> kasumi_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, 0);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -3381,7 +3600,11 @@ test_snow3g_encryption(const struct
> snow3g_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -3419,7 +3642,7 @@ test_snow3g_encryption_oop(const struct
> snow3g_test_data *tdata)
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> /* Create SNOW 3G session */
> @@ -3502,7 +3725,7 @@ test_snow3g_encryption_oop_sgl(const struct
> snow3g_test_data *tdata)
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> @@ -3621,7 +3844,7 @@ test_snow3g_encryption_offset_oop(const struct
> snow3g_test_data *tdata)
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> /* Create SNOW 3G session */
> @@ -3756,7 +3979,11 @@ static int test_snow3g_decryption(const struct
> snow3g_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> ut_params->obuf = ut_params->op->sym->m_dst;
> @@ -3791,7 +4018,7 @@ static int test_snow3g_decryption_oop(const struct
> snow3g_test_data *tdata)
> cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> - &cap_idx) == NULL)
> + &cap_idx) == NULL || cryptodev_dp_test)
> return -ENOTSUP;
>
> /* Create SNOW 3G session */
> @@ -3924,7 +4151,11 @@ test_zuc_cipher_auth(const struct
> wireless_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> ut_params->obuf = ut_params->op->sym->m_src;
> @@ -4019,7 +4250,11 @@ test_snow3g_cipher_auth(const struct
> snow3g_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> ut_params->obuf = ut_params->op->sym->m_src;
> @@ -4087,6 +4322,8 @@ test_snow3g_auth_cipher(const struct
> snow3g_test_data *tdata,
> printf("Device doesn't support digest encrypted.\n");
> return -ENOTSUP;
> }
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> }
>
> /* Create SNOW 3G session */
> @@ -4155,7 +4392,11 @@ test_snow3g_auth_cipher(const struct
> snow3g_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -4266,6 +4507,8 @@ test_snow3g_auth_cipher_sgl(const struct
> snow3g_test_data *tdata,
> return -ENOTSUP;
> }
> } else {
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> {
> printf("Device doesn't support out-of-place scatter-
> gather "
> "in both input and output mbufs.\n");
> @@ -4344,7 +4587,11 @@ test_snow3g_auth_cipher_sgl(const struct
> snow3g_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -4453,6 +4700,8 @@ test_kasumi_auth_cipher(const struct
> kasumi_test_data *tdata,
> uint64_t feat_flags = dev_info.feature_flags;
>
> if (op_mode == OUT_OF_PLACE) {
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> if (!(feat_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED)) {
> printf("Device doesn't support digest encrypted.\n");
> return -ENOTSUP;
> @@ -4526,7 +4775,11 @@ test_kasumi_auth_cipher(const struct
> kasumi_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -4638,6 +4891,8 @@ test_kasumi_auth_cipher_sgl(const struct
> kasumi_test_data *tdata,
> return -ENOTSUP;
> }
> } else {
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> {
> printf("Device doesn't support out-of-place scatter-
> gather "
> "in both input and output mbufs.\n");
> @@ -4716,7 +4971,11 @@ test_kasumi_auth_cipher_sgl(const struct
> kasumi_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -4857,7 +5116,11 @@ test_kasumi_cipher_auth(const struct
> kasumi_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -4944,7 +5207,11 @@ test_zuc_encryption(const struct wireless_test_data
> *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -5031,7 +5298,11 @@ test_zuc_encryption_sgl(const struct
> wireless_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -5119,7 +5390,11 @@ test_zuc_authentication(const struct
> wireless_test_data *tdata)
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 1, 0);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> ut_params->obuf = ut_params->op->sym->m_src;
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -5177,6 +5452,8 @@ test_zuc_auth_cipher(const struct wireless_test_data
> *tdata,
> return -ENOTSUP;
> }
> } else {
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> {
> printf("Device doesn't support out-of-place scatter-
> gather "
> "in both input and output mbufs.\n");
> @@ -5251,7 +5528,11 @@ test_zuc_auth_cipher(const struct
> wireless_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -5359,6 +5640,8 @@ test_zuc_auth_cipher_sgl(const struct
> wireless_test_data *tdata,
> return -ENOTSUP;
> }
> } else {
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> {
> printf("Device doesn't support out-of-place scatter-
> gather "
> "in both input and output mbufs.\n");
> @@ -5437,7 +5720,11 @@ test_zuc_auth_cipher_sgl(const struct
> wireless_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> + else
> + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> @@ -5580,6 +5867,9 @@ test_kasumi_decryption_test_case_2(void)
> static int
> test_kasumi_decryption_test_case_3(void)
> {
> + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> return test_kasumi_decryption(&kasumi_test_case_3);
> }
>
> @@ -5779,6 +6069,9 @@ test_snow3g_auth_cipher_part_digest_enc_oop(void)
> static int
> test_snow3g_auth_cipher_test_case_3_sgl(void)
> {
> + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> return test_snow3g_auth_cipher_sgl(
> &snow3g_auth_cipher_test_case_3, IN_PLACE, 0);
> }
> @@ -5793,6 +6086,9 @@ test_snow3g_auth_cipher_test_case_3_oop_sgl(void)
> static int
> test_snow3g_auth_cipher_part_digest_enc_sgl(void)
> {
> + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> return test_snow3g_auth_cipher_sgl(
> &snow3g_auth_cipher_partial_digest_encryption,
> IN_PLACE, 0);
> @@ -6146,10 +6442,9 @@ test_mixed_auth_cipher(const struct
> mixed_cipher_auth_test_data *tdata,
> unsigned int ciphertext_len;
>
> struct rte_cryptodev_info dev_info;
> - struct rte_crypto_op *op;
>
> /* Check if device supports particular algorithms separately */
> - if (test_mixed_check_if_unsupported(tdata))
> + if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
> return -ENOTSUP;
>
> rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> @@ -6161,6 +6456,9 @@ test_mixed_auth_cipher(const struct
> mixed_cipher_auth_test_data *tdata,
> return -ENOTSUP;
> }
>
> + if (op_mode == OUT_OF_PLACE)
> + return -ENOTSUP;
> +
> /* Create the session */
> if (verify)
> retval = create_wireless_algo_cipher_auth_session(
> @@ -6192,9 +6490,11 @@ test_mixed_auth_cipher(const struct
> mixed_cipher_auth_test_data *tdata,
> /* clear mbuf payload */
> memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
> rte_pktmbuf_tailroom(ut_params->ibuf));
> - if (op_mode == OUT_OF_PLACE)
> + if (op_mode == OUT_OF_PLACE) {
> +
> memset(rte_pktmbuf_mtod(ut_params->obuf, uint8_t *), 0,
> rte_pktmbuf_tailroom(ut_params->obuf));
> + }
Unnecessary change.
>
> ciphertext_len = ceil_byte_length(tdata->ciphertext.len_bits);
> plaintext_len = ceil_byte_length(tdata->plaintext.len_bits);
> @@ -6235,18 +6535,17 @@ test_mixed_auth_cipher(const struct
> mixed_cipher_auth_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - op = process_crypto_request(ts_params->valid_devs[0],
> + ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> ut_params->op);
>
> /* Check if the op failed because the device doesn't */
> /* support this particular combination of algorithms */
> - if (op == NULL && ut_params->op->status ==
> + if (ut_params->op == NULL && ut_params->op->status ==
> RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
> printf("Device doesn't support this mixed combination. "
> "Test Skipped.\n");
> return -ENOTSUP;
> }
> - ut_params->op = op;
Unnecessary change.
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> @@ -6337,10 +6636,9 @@ test_mixed_auth_cipher_sgl(const struct
> mixed_cipher_auth_test_data *tdata,
> uint8_t digest_buffer[10000];
>
> struct rte_cryptodev_info dev_info;
> - struct rte_crypto_op *op;
>
> /* Check if device supports particular algorithms */
> - if (test_mixed_check_if_unsupported(tdata))
> + if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
> return -ENOTSUP;
>
> rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> @@ -6440,20 +6738,18 @@ test_mixed_auth_cipher_sgl(const struct
> mixed_cipher_auth_test_data *tdata,
> if (retval < 0)
> return retval;
>
> - op = process_crypto_request(ts_params->valid_devs[0],
> + ut_params->op = process_crypto_request(ts_params->valid_devs[0],
> ut_params->op);
>
> /* Check if the op failed because the device doesn't */
> /* support this particular combination of algorithms */
> - if (op == NULL && ut_params->op->status ==
> + if (ut_params->op == NULL && ut_params->op->status ==
> RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
> printf("Device doesn't support this mixed combination. "
> "Test Skipped.\n");
> return -ENOTSUP;
> }
Unnecessary change.
>
> - ut_params->op = op;
> -
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
>
> ut_params->obuf = (op_mode == IN_PLACE ?
> @@ -7043,6 +7339,9 @@ test_authenticated_encryption(const struct
> aead_test_data *tdata)
> /* Process crypto operation */
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_aead_op(ts_params->valid_devs[0], ut_params-
> >op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 0, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -8540,6 +8839,9 @@ test_authenticated_decryption(const struct
> aead_test_data *tdata)
> /* Process crypto operation */
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_aead_op(ts_params->valid_devs[0], ut_params-
> >op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 0, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -8833,6 +9135,9 @@ test_authenticated_encryption_oop(const struct
> aead_test_data *tdata)
> if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> &cap_idx) == NULL)
> return -ENOTSUP;
> + /* Data-path service does not support OOP */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
>
> /* not supported with CPU crypto */
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> @@ -8923,8 +9228,9 @@ test_authenticated_decryption_oop(const struct
> aead_test_data *tdata)
> &cap_idx) == NULL)
> return -ENOTSUP;
>
> - /* not supported with CPU crypto */
> - if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> + /* not supported with CPU crypto and data-path service*/
> + if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO ||
> + cryptodev_dp_test)
> return -ENOTSUP;
>
> /* Create AEAD session */
> @@ -9151,8 +9457,13 @@ test_authenticated_decryption_sessionless(
> "crypto op session type not sessionless");
>
> /* Process crypto operation */
> - TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params-
> >valid_devs[0],
> - ut_params->op), "failed to process sym crypto op");
> + if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 0, 0, 0);
> + else
> + TEST_ASSERT_NOT_NULL(process_crypto_request(
> + ts_params->valid_devs[0], ut_params->op),
> + "failed to process sym crypto op");
>
> TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
>
> @@ -9472,6 +9783,9 @@ test_MD5_HMAC_generate(const struct
> HMAC_MD5_vector *test_case)
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -9530,6 +9844,9 @@ test_MD5_HMAC_verify(const struct
> HMAC_MD5_vector *test_case)
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -10098,6 +10415,9 @@ test_AES_GMAC_authentication(const struct
> gmac_test_data *tdata)
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -10215,6 +10535,9 @@ test_AES_GMAC_authentication_verify(const
> struct gmac_test_data *tdata)
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -10780,7 +11103,10 @@
> test_authentication_verify_fail_when_data_corruption(
> TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
> RTE_CRYPTO_OP_STATUS_SUCCESS,
> "authentication not failed");
> - } else {
> + } else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 0, 0);
> + else {
> ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NULL(ut_params->op, "authentication not
> failed");
> @@ -10851,7 +11177,10 @@
> test_authentication_verify_GMAC_fail_when_corruption(
> TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
> RTE_CRYPTO_OP_STATUS_SUCCESS,
> "authentication not failed");
> - } else {
> + } else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 1, 0, 0);
> + else {
> ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NULL(ut_params->op, "authentication not
> failed");
> @@ -10926,7 +11255,10 @@
> test_authenticated_decryption_fail_when_corruption(
> TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
> RTE_CRYPTO_OP_STATUS_SUCCESS,
> "authentication not failed");
> - } else {
> + } else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 0, 0);
> + else {
> ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> TEST_ASSERT_NULL(ut_params->op, "authentication not
> failed");
> @@ -11021,6 +11353,9 @@ test_authenticated_encryt_with_esn(
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 0, 0);
> else
> ut_params->op = process_crypto_request(
> ts_params->valid_devs[0], ut_params->op);
> @@ -11141,6 +11476,9 @@ test_authenticated_decrypt_with_esn(
> if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> ut_params->op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 1, 1, 0, 0);
> else
> ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> ut_params->op);
> @@ -11285,6 +11623,9 @@ test_authenticated_encryption_SGL(const struct
> aead_test_data *tdata,
> unsigned int sgl_in = fragsz < tdata->plaintext.len;
> unsigned int sgl_out = (fragsz_oop ? fragsz_oop : fragsz) <
> tdata->plaintext.len;
> + /* Data path service does not support OOP */
> + if (cryptodev_dp_test)
> + return -ENOTSUP;
> if (sgl_in && !sgl_out) {
> if (!(dev_info.feature_flags &
>
> RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))
> @@ -11480,6 +11821,9 @@ test_authenticated_encryption_SGL(const struct
> aead_test_data *tdata,
> if (oop == IN_PLACE &&
> gbl_action_type ==
> RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> process_cpu_aead_op(ts_params->valid_devs[0], ut_params-
> >op);
> + else if (cryptodev_dp_test)
> + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> + ut_params->op, 0, 0, 0, 0);
> else
> TEST_ASSERT_NOT_NULL(
> process_crypto_request(ts_params->valid_devs[0],
> @@ -13041,6 +13385,29 @@ test_cryptodev_nitrox(void)
> return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
> }
>
> +static int
> +test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/)
> +{
> + int ret;
> +
> + gbl_driver_id = rte_cryptodev_driver_id_get(
> + RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
> +
> + if (gbl_driver_id == -1) {
> + RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that
> both "
> + "CONFIG_RTE_LIBRTE_PMD_QAT and
> CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
> + "are enabled in config file to run this testsuite.\n");
> + return TEST_SKIPPED;
> + }
> +
> + cryptodev_dp_test = 1;
cryptodev_dp_test cannot be set blindly. You should check the device's feature flags to see
whether the feature is supported.
Can we also rename this flag to "test_raw_crypto_dp"?
> + ret = unit_test_suite_runner(&cryptodev_testsuite);
> + cryptodev_dp_test = 0;
> +
> + return ret;
> +}
> +
> +REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest,
> test_qat_sym_direct_api);
It would be better to name the test string test_cryptodev_qat_raw_dp.
> REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
> REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest,
> test_cryptodev_aesni_mb);
> REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
> diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
> index 41542e055..e4e4c7626 100644
> --- a/app/test/test_cryptodev.h
> +++ b/app/test/test_cryptodev.h
> @@ -71,6 +71,8 @@
> #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
> #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
>
> +extern int cryptodev_dp_test;
> +
> /**
> * Write (spread) data from buffer to mbuf data
> *
> @@ -209,4 +211,9 @@ create_segmented_mbuf(struct rte_mempool
> *mbuf_pool, int pkt_len,
> return NULL;
> }
>
> +void
> +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op
> *op,
> + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
> + uint8_t cipher_iv_len);
> +
> #endif /* TEST_CRYPTODEV_H_ */
> diff --git a/app/test/test_cryptodev_blockcipher.c
> b/app/test/test_cryptodev_blockcipher.c
> index 221262341..311b34c15 100644
> --- a/app/test/test_cryptodev_blockcipher.c
> +++ b/app/test/test_cryptodev_blockcipher.c
> @@ -462,25 +462,44 @@ test_blockcipher_one_case(const struct
> blockcipher_test_case *t,
> }
>
> /* Process crypto operation */
> - if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
> - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> - "line %u FAILED: %s",
> - __LINE__, "Error sending packet for encryption");
> - status = TEST_FAILED;
> - goto error_exit;
> - }
> + if (cryptodev_dp_test) {
> + uint8_t is_cipher = 0, is_auth = 0;
>
> - op = NULL;
> + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
> + RTE_LOG(DEBUG, USER1,
> + "QAT direct API does not support OOP, Test
> Skipped.\n");
> + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> "SKIPPED");
> + status = TEST_SUCCESS;
> + goto error_exit;
> + }
> + if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
> + is_cipher = 1;
> + if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
> + is_auth = 1;
>
> - while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
> - rte_pause();
> + process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0,
> + tdata->iv.len);
> + } else {
> + if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
> + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> + "line %u FAILED: %s",
> + __LINE__, "Error sending packet for
> encryption");
> + status = TEST_FAILED;
> + goto error_exit;
> + }
>
> - if (!op) {
> - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> - "line %u FAILED: %s",
> - __LINE__, "Failed to process sym crypto op");
> - status = TEST_FAILED;
> - goto error_exit;
> + op = NULL;
> +
> + while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
> + rte_pause();
> +
> + if (!op) {
> + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> + "line %u FAILED: %s",
> + __LINE__, "Failed to process sym crypto op");
> + status = TEST_FAILED;
> + goto error_exit;
> + }
> }
>
> debug_hexdump(stdout, "m_src(after):",
> --
> 2.20.1
* Re: [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide Fan Zhang
@ 2020-09-18 20:39 ` Akhil Goyal
2020-09-21 12:28 ` Zhang, Roy Fan
2020-09-23 13:37 ` Zhang, Roy Fan
0 siblings, 2 replies; 84+ messages in thread
From: Akhil Goyal @ 2020-09-18 20:39 UTC (permalink / raw)
To: Fan Zhang, dev
Cc: fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, Anoob Joseph
Hi Fan,
> Subject: [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
>
> This patch updates programmer's guide to demonstrate the usage
> and limitations of cryptodev symmetric crypto data-path service
> APIs.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
> doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++
> doc/guides/rel_notes/release_20_11.rst | 7 ++
> 2 files changed, 97 insertions(+)
We generally do not take separate patches for documentation. Squash it into the patches
which implement the feature.
>
> diff --git a/doc/guides/prog_guide/cryptodev_lib.rst
> b/doc/guides/prog_guide/cryptodev_lib.rst
> index c14f750fa..1321e4c5d 100644
> --- a/doc/guides/prog_guide/cryptodev_lib.rst
> +++ b/doc/guides/prog_guide/cryptodev_lib.rst
> @@ -631,6 +631,96 @@ a call argument. Status different than zero must be
> treated as error.
> For more details, e.g. how to convert an mbuf to an SGL, please refer to an
> example usage in the IPsec library implementation.
>
> +Cryptodev Direct Data-path Service API
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What do you mean by "Direct" here? It should be referenced as raw APIs.
Moreover, the "service" keyword can also be dropped. We normally use it for a
software implementation of a feature which is normally done in hardware.
> +
> +Direct crypto data-path service are a set of APIs that especially provided for
> +the external libraries/applications who want to take advantage of the rich
> +features provided by cryptodev, but not necessarily depend on cryptodev
> +operations, mempools, or mbufs in the their data-path implementations.
Raw crypto data path is a set of APIs which can be used by external
libraries/applications to take advantage of the rich features provided by cryptodev,
but which do not necessarily depend on cryptodev operations, mempools, or mbufs in
their data-path implementations.
> +
> +The direct crypto data-path service has the following advantages:
> +- Supports raw data pointer and physical addresses as input.
> +- Do not require specific data structure allocated from heap, such as
> + cryptodev operation.
> +- Enqueue in a burst or single operation. The service allow enqueuing in
> + a burst similar to ``rte_cryptodev_enqueue_burst`` operation, or only
> + enqueue one job at a time but maintaining necessary context data locally for
> + next single job enqueue operation. The latter method is especially helpful
> + when the user application's crypto operations are clustered into a burst.
> + Allowing enqueue one operation at a time helps reducing one additional loop
> + and also reduced the cache misses during the double "looping" situation.
> +- Customerizable dequeue count. Instead of dequeue maximum possible
> operations
> + as same as ``rte_cryptodev_dequeue_burst`` operation, the service allows the
> + user to provide a callback function to decide how many operations to be
> + dequeued. This is especially helpful when the expected dequeue count is
> + hidden inside the opaque data stored during enqueue. The user can provide
> + the callback function to parse the opaque data structure.
> +- Abandon enqueue and dequeue anytime. One of the drawbacks of
> + ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
> + operations are: once an operation is enqueued/dequeued there is no way to
> + undo the operation. The service make the operation abandon possible by
> + creating a local copy of the queue operation data in the service context
> + data. The data will be written back to driver maintained operation data
> + when enqueue or dequeue done function is called.
> +
The language in the above text needs to be rewritten. Some sentences do not
make complete sense and have grammatical errors.
I suggest having an internal review within Intel first before sending the next version.
> +The cryptodev data-path service uses
> +
> +Cryptodev PMDs who supports this feature will have
> +``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag presented. To use
RTE_CRYPTODEV_FF_SYM_HW_RAW_DP looks better.
> this
> +feature the function ``rte_cryptodev_get_dp_service_ctx_data_size`` should
Can be renamed as rte_cryptodev_get_raw_dp_ctx_size
> +be called to get the data path service context data size. The user should
> +creates a local buffer at least this size long and initialize it using
The user should create a local buffer of at least this size and initialize it using
> +``rte_cryptodev_dp_configure_service`` function call.
rte_cryptodev_raw_dp_configure or rte_cryptodev_configure_raw_dp can be used here.
> +
> +The ``rte_cryptodev_dp_configure_service`` function call initialize or
> +updates the ``struct rte_crypto_dp_service_ctx`` buffer, in which contains the
> +driver specific queue pair data pointer and service context buffer, and a
> +set of function pointers to enqueue and dequeue different algorithms'
> +operations. The ``rte_cryptodev_dp_configure_service`` should be called when:
> +
> +- Before enqueuing or dequeuing starts (set ``is_update`` parameter to 0).
> +- When different cryptodev session, security session, or session-less xform
> + is used (set ``is_update`` parameter to 1).
The use of is_update is not clear from the above text. IMO, we do not need this flag.
Whenever an update is required, we change the session information and call the
same API again, and the driver can copy all information blindly without checking.
> +
> +Two different enqueue functions are provided.
> +
> +- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored in
> + the ``rte_crypto_sym_vec`` structure.
> +- ``rte_cryptodev_dp_submit_single_job``: submit single operation.
What is the meaning of "single job" here? Can we use multiple buffers/vectors of the same
session in a single job, or can we submit only a single buffer/vector in a job?
> +
> +Either enqueue functions will not command the crypto device to start
> processing
> +until ``rte_cryptodev_dp_submit_done`` function is called. Before then the user
> +shall expect the driver only stores the necessory context data in the
> +``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
> user
> +wants to abandon the submitted operations, simply call
> +``rte_cryptodev_dp_configure_service`` function instead with the parameter
> +``is_update`` set to 0. The driver will recover the service context data to
> +the previous state.
Can you explain a use case where this is actually being used? This looks fancy, but
do we have this type of requirement in any protocol stacks/specifications?
I believe it to be an extra burden on the application writer if it is not a protocol requirement.
> +
> +To dequeue the operations the user also have two operations:
> +
> +- ``rte_cryptodev_dp_sym_dequeue``: fully customizable deuqueue operation.
> The
> + user needs to provide the callback function for the driver to get the
> + dequeue count and perform post processing such as write the status field.
> +- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue single job.
> +
> +Same as enqueue, the function ``rte_cryptodev_dp_dequeue_done`` is used to
> +merge user's local service context data with the driver's queue operation
> +data. Also to abandon the dequeue operation (still keep the operations in the
> +queue), the user shall avoid ``rte_cryptodev_dp_dequeue_done`` function call
> +but calling ``rte_cryptodev_dp_configure_service`` function with the parameter
> +``is_update`` set to 0.
> +
> +There are a few limitations to the data path service:
> +
> +* Only support in-place operations.
> +* APIs are NOT thread-safe.
> +* CANNOT mix the direct API's enqueue with rte_cryptodev_enqueue_burst, or
> + vice versa.
> +
> +See *DPDK API Reference* for details on each API definitions.
> +
> Sample code
> -----------
>
> diff --git a/doc/guides/rel_notes/release_20_11.rst
> b/doc/guides/rel_notes/release_20_11.rst
> index df227a177..159823345 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -55,6 +55,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> + * **Added data-path APIs for cryptodev library.**
> +
> + Cryptodev is added data-path APIs to accelerate external libraries or
> + applications those want to avail fast cryptodev enqueue/dequeue
> + operations but does not necessarily depends on mbufs and cryptodev
> + operation mempool.
> +
>
> Removed Items
> -------------
Regards,
Akhil
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto " Fan Zhang
@ 2020-09-18 21:50 ` Akhil Goyal
2020-09-21 10:40 ` Zhang, Roy Fan
0 siblings, 1 reply; 84+ messages in thread
From: Akhil Goyal @ 2020-09-18 21:50 UTC (permalink / raw)
To: Fan Zhang, dev
Cc: fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
Piotr Bronowski, Anoob Joseph
Hi Fan,
> Subject: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
> This patch adds data-path service APIs for enqueue and dequeue
> operations to cryptodev. The APIs support flexible user-define
> enqueue and dequeue behaviors and operation mode.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
> ---
> lib/librte_cryptodev/rte_crypto.h | 9 +
> lib/librte_cryptodev/rte_crypto_sym.h | 49 ++-
> lib/librte_cryptodev/rte_cryptodev.c | 98 +++++
> lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
> lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 ++-
> .../rte_cryptodev_version.map | 10 +
> 6 files changed, 540 insertions(+), 9 deletions(-)
>
> diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
> index fd5ef3a87..f009be9af 100644
> --- a/lib/librte_cryptodev/rte_crypto.h
> +++ b/lib/librte_cryptodev/rte_crypto.h
> @@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct
> rte_crypto_op *op,
> return 0;
> }
>
> +/** Crypto data-path service types */
> +enum rte_crypto_dp_service {
> + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
> + RTE_CRYPTO_DP_SYM_AUTH_ONLY,
> + RTE_CRYPTO_DP_SYM_CHAIN,
> + RTE_CRYPTO_DP_SYM_AEAD,
> + RTE_CRYPTO_DP_N_SERVICE
> +};
Comments are missing for this enum.
Do we really need this enum?
Can we not have this info in the driver from the xform list?
And if we really want to add it, why have it specific to the raw data path APIs?
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> b/lib/librte_cryptodev/rte_crypto_sym.h
> index f29c98051..376412e94 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -50,6 +50,30 @@ struct rte_crypto_sgl {
> uint32_t num;
> };
>
> +/**
> + * Symmetri Crypto Addtional Data other than src and destination data.
> + * Supposed to be used to pass IV/digest/aad data buffers with lengths
> + * defined when creating crypto session.
> + */
Fix the typos ("Symmetri" -> "Symmetric", "Addtional" -> "Additional").
> +union rte_crypto_sym_additional_data {
> + struct {
> + void *cipher_iv_ptr;
> + rte_iova_t cipher_iv_iova;
> + void *auth_iv_ptr;
> + rte_iova_t auth_iv_iova;
> + void *digest_ptr;
> + rte_iova_t digest_iova;
> + } cipher_auth;
Should be chain instead of cipher_auth
> + struct {
> + void *iv_ptr;
> + rte_iova_t iv_iova;
> + void *digest_ptr;
> + rte_iova_t digest_iova;
> + void *aad_ptr;
> + rte_iova_t aad_iova;
> + } aead;
> +};
> +
> /**
> * Synchronous operation descriptor.
> * Supposed to be used with CPU crypto API call.
> @@ -57,12 +81,25 @@ struct rte_crypto_sgl {
> struct rte_crypto_sym_vec {
> /** array of SGL vectors */
> struct rte_crypto_sgl *sgl;
> - /** array of pointers to IV */
> - void **iv;
> - /** array of pointers to AAD */
> - void **aad;
> - /** array of pointers to digest */
> - void **digest;
> +
> + union {
> +
> + /* Supposed to be used with CPU crypto API call. */
> + struct {
> + /** array of pointers to IV */
> + void **iv;
> + /** array of pointers to AAD */
> + void **aad;
> + /** array of pointers to digest */
> + void **digest;
> + };
Can we also name this struct?
Probably we should split this as a separate patch.
> +
> + /* Supposed to be used with
> rte_cryptodev_dp_sym_submit_vec()
> + * call.
> + */
> + union rte_crypto_sym_additional_data *additional_data;
> + };
> +
Can we get rid of this unnecessary union rte_crypto_sym_additional_data
and place chain and aead directly in the union? At any point, only one of the three
would be used.
> /**
> * array of statuses for each operation:
> * - 0 on success
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> b/lib/librte_cryptodev/rte_cryptodev.c
> index 1dd795bcb..4f59cf800 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -1914,6 +1914,104 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t
> dev_id,
> return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
> }
>
> +int
> +rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id)
> +{
> + struct rte_cryptodev *dev;
> + int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
> + int32_t priv_size;
> +
> + if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> + return -1;
> +
> + dev = rte_cryptodev_pmd_get_dev(dev_id);
> +
> + if (*dev->dev_ops->get_drv_ctx_size == NULL ||
> + !(dev->feature_flags &
> RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)) {
> + return -1;
> + }
I have some suggestions for the naming of the APIs/flags in the doc patch;
please check that and make changes in this patch.
Also, you have missed adding this feature flag in the
doc/guides/cryptodevs/features/default.ini file,
and subsequently in the doc/guides/cryptodevs/features/qat.ini file.
> +
> + priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
> + if (priv_size < 0)
> + return -1;
> +
> + return RTE_ALIGN_CEIL((size + priv_size), 8);
> +}
> +
> +int
> +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
> + enum rte_crypto_dp_service service_type,
> + enum rte_crypto_op_sess_type sess_type,
> + union rte_cryptodev_session_ctx session_ctx,
> + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
> +{
> + struct rte_cryptodev *dev;
> +
> + if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
> + return -1;
> +
> + dev = rte_cryptodev_pmd_get_dev(dev_id);
> + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)
> + || dev->dev_ops->configure_service == NULL)
> + return -1;
It would be better to return an actual error number like -ENOTSUP/-EINVAL;
it would be helpful in debugging.
> +
> + return (*dev->dev_ops->configure_service)(dev, qp_id, service_type,
> + sess_type, session_ctx, ctx, is_update);
> +}
> +
> +int
> +rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx
> *ctx,
> + struct rte_crypto_vec *data, uint16_t n_data_vecs,
> + union rte_crypto_sym_ofs ofs,
> + union rte_crypto_sym_additional_data *additional_data,
> + void *opaque)
> +{
Can we have some debug checks for NULL pointers?
> + return _cryptodev_dp_submit_single_job(ctx, data, n_data_vecs, ofs,
> + additional_data, opaque);
The _cryptodev_dp_submit_single_job wrapper is an unnecessary function call.
You can directly call:
return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
	data, n_data_vecs, ofs, additional_data, opaque);
> +}
> +
> +uint32_t
> +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
> + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
> + void **opaque)
> +{
> + return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
> + ofs, opaque);
> +}
> +
> +int
> +rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx
> *ctx,
> + void **out_opaque)
> +{
> + return _cryptodev_dp_sym_dequeue_single_job(ctx, out_opaque);
Same here.
> +}
> +
> +int
> +rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
> + uint32_t n)
> +{
> + return (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n);
> +}
> +
> +int
> +rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
> + uint32_t n)
> +{
> + return (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n);
> +}
> +
> +uint32_t
> +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
> + rte_cryptodev_get_dequeue_count_t get_dequeue_count,
> + rte_cryptodev_post_dequeue_t post_dequeue,
> + void **out_opaque, uint8_t is_opaque_array,
> + uint32_t *n_success_jobs)
> +{
> + return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data,
> + get_dequeue_count, post_dequeue, out_opaque,
> is_opaque_array,
> + n_success_jobs);
> +}
> +
> /** Initialise rte_crypto_op mempool element */
> static void
> rte_crypto_op_init(struct rte_mempool *mempool,
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h
> b/lib/librte_cryptodev/rte_cryptodev.h
> index 7b3ebc20f..4da0389d1 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum
> rte_crypto_asym_xform_type *xform_enum,
> /**< Support symmetric session-less operations */
> #define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL
> << 23)
> /**< Support operations on data which is not byte aligned */
> -
> +#define RTE_CRYPTODEV_FF_DATA_PATH_SERVICE (1ULL << 24)
> +/**< Support accelerated specific raw data as input */
Support data path APIs for raw data as input.
>
> /**
> * Get the name of a crypto device feature flag
> @@ -1351,6 +1352,338 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t
> dev_id,
> struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
> struct rte_crypto_sym_vec *vec);
>
> +/**
> + * Get the size of the data-path service context for all registered drivers.
For all drivers? Or for a device?
> + *
> + * @param dev_id The device identifier.
> + *
> + * @return
> + * - If the device supports data-path service, return the context size.
> + * - If the device does not support the data-dath service, return -1.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id);
> +
> +/**
> + * Union of different crypto session types, including session-less xform
> + * pointer.
Union of different symmetric crypto session types ..
> + */
> +union rte_cryptodev_session_ctx {
> + struct rte_cryptodev_sym_session *crypto_sess;
> + struct rte_crypto_sym_xform *xform;
> + struct rte_security_session *sec_sess;
> +};
> +
> +/**
> + * Submit a data vector into device queue but the driver will not start
> + * processing until rte_cryptodev_dp_sym_submit_vec() is called.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param vec The array of job vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param opaque The array of opaque data for dequeue.
Can you elaborate the usage of opaque here?
> + * @return
> + * - The number of jobs successfully submitted.
> + */
> +typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
> + void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
> + union rte_crypto_sym_ofs ofs, void **opaque);
> +
> +/**
> + * Submit single job into device queue but the driver will not start
> + * processing until rte_cryptodev_dp_sym_submit_vec() is called.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param data The buffer vector.
> + * @param n_data_vecs Number of buffer vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param additional_data IV, digest, and aad data.
> + * @param opaque The opaque data for dequeue.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +typedef int (*cryptodev_dp_submit_single_job_t)(
> + void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
> + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
> + union rte_crypto_sym_additional_data *additional_data,
> + void *opaque);
> +
> +/**
> + * Inform the queue pair to start processing or finish dequeuing all
> + * submitted/dequeued jobs.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param n The total number of submitted jobs.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +typedef int (*cryptodev_dp_sym_operation_done_t)(void *qp,
> + uint8_t *service_data, uint32_t n);
> +
> +/**
> + * Typedef that the user provided for the driver to get the dequeue count.
> + * The function may return a fixed number or the number parsed from the
> opaque
> + * data stored in the first processed job.
> + *
> + * @param opaque Dequeued opaque data.
> + **/
> +typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque);
> +
> +/**
> + * Typedef that the user provided to deal with post dequeue operation, such
> + * as filling status.
> + *
> + * @param opaque Dequeued opaque data. In case
> + *
> RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is
> + * set, this value will be the opaque data stored
> + * in the specific processed jobs referenced by
> + * index, otherwise it will be the opaque data
> + * stored in the first processed job in the burst.
What is RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY? I did not find this in the series.
> + * @param index Index number of the processed job.
> + * @param is_op_success Driver filled operation status.
> + **/
> +typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index,
> + uint8_t is_op_success);
> +
> +/**
> + * Dequeue symmetric crypto processing of user provided data.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param get_dequeue_count User provided callback function to
> + * obtain dequeue count.
> + * @param post_dequeue User provided callback function to
> + * post-process a dequeued operation.
> + * @param out_opaque Opaque pointer array to be retrieve
> from
> + * device queue. In case of
> + * *is_opaque_array* is set there should
> + * be enough room to store all opaque
> data.
> + * @param is_opaque_array Set 1 if every dequeued job will
> be
> + * written the opaque data into
> + * *out_opaque* array.
> + * @param n_success_jobs Driver written value to specific the
> + * total successful operations count.
> + *
> + * @return
> + * - Returns number of dequeued packets.
> + */
> +typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t
> *service_data,
> + rte_cryptodev_get_dequeue_count_t get_dequeue_count,
> + rte_cryptodev_post_dequeue_t post_dequeue,
> + void **out_opaque, uint8_t is_opaque_array,
> + uint32_t *n_success_jobs);
> +
> +/**
> + * Dequeue symmetric crypto processing of user provided data.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param service_data Driver specific service data.
> + * @param out_opaque Opaque pointer to be retrieve from
> + * device queue.
> + *
> + * @return
> + * - 1 if the job is dequeued and the operation is a success.
> + * - 0 if the job is dequeued but the operation is failed.
> + * - -1 if no job is dequeued.
> + */
> +typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
> + void *qp, uint8_t *service_data, void **out_opaque);
> +
> +/**
> + * Context data for asynchronous crypto process.
> + */
> +struct rte_crypto_dp_service_ctx {
> + void *qp_data;
> +
> + struct {
> + cryptodev_dp_submit_single_job_t submit_single_job;
> + cryptodev_dp_sym_submit_vec_t submit_vec;
> + cryptodev_dp_sym_operation_done_t submit_done;
> + cryptodev_dp_sym_dequeue_t dequeue_opaque;
> + cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
> + cryptodev_dp_sym_operation_done_t dequeue_done;
> + };
> +
> + /* Driver specific service data */
> + __extension__ uint8_t drv_service_data[];
> +};
Comments missing for structure params.
Struct name can be rte_crypto_raw_dp_ctx.
Who allocates and frees this structure?
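For what it's worth, the pattern under discussion — a public header of driver callbacks followed by a flexible array member holding driver-private data, all in one user-allocated buffer — can be sketched with toy names (nothing below is the actual cryptodev or QAT code, just an illustration of the layout):

```c
/* Toy sketch of the rte_crypto_dp_service_ctx layout: callback header
 * plus flexible array member for driver-private data, allocated by the
 * user in a single contiguous buffer. All names are hypothetical. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct toy_dp_ctx {
	void *qp_data;                 /* driver queue-pair handle */
	int (*submit_single)(void *qp, uint8_t *drv_data, int job);
	/* driver-private data lives directly after the header */
	uint8_t drv_data[];
};

/* A toy "driver" that counts submitted jobs in its private area. */
static int toy_submit(void *qp, uint8_t *drv_data, int job)
{
	(void)qp; (void)job;
	uint32_t *count = (uint32_t *)(void *)drv_data;
	return (int)(++(*count));
}

/* Analogous to the driver's get_drv_ctx_size() op. */
static size_t toy_ctx_size(void)
{
	return sizeof(struct toy_dp_ctx) + sizeof(uint32_t);
}

/* The user allocates; the "driver" fills in its callbacks. */
static struct toy_dp_ctx *toy_ctx_alloc_and_configure(void)
{
	struct toy_dp_ctx *ctx = calloc(1, toy_ctx_size());

	if (ctx != NULL)
		ctx->submit_single = toy_submit;
	return ctx;
}
```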
> +
> +/**
> + * Configure one DP service context data. Calling this function for the first
> + * time the user should unset the *is_update* parameter and the driver will
> + * fill necessary operation data into ctx buffer. Only when
> + * rte_cryptodev_dp_submit_done() is called the data stored in the ctx buffer
> + * will not be effective.
> + *
> + * @param dev_id The device identifier.
> + * @param qp_id The index of the queue pair from which to
> + * retrieve processed packets. The value must be
> + * in the range [0, nb_queue_pair - 1] previously
> + * supplied to rte_cryptodev_configure().
> + * @param service_type Type of the service requested.
> + * @param sess_type session type.
> + * @param session_ctx Session context data.
> + * @param ctx The data-path service context data.
> + * @param is_update Set 1 if ctx is pre-initialized but need
> + * update to different service type or session,
> + * but the rest driver data remains the same.
> + * Since service context data buffer is provided
> + * by user, the driver will not check the
> + * validity of the buffer nor its content. It is
> + * the user's obligation to initialize and
> + * uses the buffer properly by setting this field.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
> + enum rte_crypto_dp_service service_type,
> + enum rte_crypto_op_sess_type sess_type,
> + union rte_cryptodev_session_ctx session_ctx,
> + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
> +
> +static __rte_always_inline int
> +_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
> + struct rte_crypto_vec *data, uint16_t n_data_vecs,
> + union rte_crypto_sym_ofs ofs,
> + union rte_crypto_sym_additional_data *additional_data,
> + void *opaque)
> +{
> + return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
> + data, n_data_vecs, ofs, additional_data, opaque);
> +}
> +
> +static __rte_always_inline int
> +_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx
> *ctx,
> + void **out_opaque)
> +{
> + return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
> + out_opaque);
> +}
> +
> +/**
> + * Submit single job into device queue but the driver will not start
> + * processing until rte_cryptodev_dp_submit_done() is called. This is a
> + * simplified
> + *
> + * @param ctx The initialized data-path service context data.
> + * @param data The buffer vector.
> + * @param n_data_vecs Number of buffer vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param additional_data IV, digest, and aad
> + * @param opaque The array of opaque data for dequeue.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx
> *ctx,
> + struct rte_crypto_vec *data, uint16_t n_data_vecs,
> + union rte_crypto_sym_ofs ofs,
> + union rte_crypto_sym_additional_data *additional_data,
> + void *opaque);
> +
> +/**
> + * Submit a data vector into device queue but the driver will not start
> + * processing until rte_cryptodev_dp_submit_done() is called.
> + *
> + * @param ctx The initialized data-path service context data.
> + * @param vec The array of job vectors.
> + * @param ofs Start and stop offsets for auth and cipher operations.
> + * @param opaque The array of opaque data for dequeue.
> + * @return
> + * - The number of jobs successfully submitted.
> + */
> +__rte_experimental
> +uint32_t
> +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
> + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
> + void **opaque);
> +
> +/**
> + * Command the queue pair to start processing all submitted jobs from last
> + * rte_cryptodev_init_dp_service() call.
> + *
> + * @param ctx The initialized data-path service context data.
> + * @param n The total number of submitted jobs.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
> + uint32_t n);
> +
> +/**
> + * Dequeue symmetric crypto processing of user provided data.
> + *
> + * @param ctx The initialized data-path service
> + * context data.
> + * @param get_dequeue_count User provided callback function to
> + * obtain dequeue count.
> + * @param post_dequeue User provided callback function to
> + * post-process a dequeued operation.
> + * @param out_opaque Opaque pointer array to be retrieve
> from
> + * device queue. In case of
> + * *is_opaque_array* is set there should
> + * be enough room to store all opaque
> data.
> + * @param is_opaque_array Set 1 if every dequeued job will
> be
> + * written the opaque data into
> + * *out_opaque* array.
> + * @param n_success_jobs Driver written value to specific the
> + * total successful operations count.
> + *
> + * @return
> + * - Returns number of dequeued packets.
> + */
> +__rte_experimental
> +uint32_t
> +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
> + rte_cryptodev_get_dequeue_count_t get_dequeue_count,
> + rte_cryptodev_post_dequeue_t post_dequeue,
> + void **out_opaque, uint8_t is_opaque_array,
> + uint32_t *n_success_jobs);
> +
> +/**
> + * Dequeue Single symmetric crypto processing of user provided data.
> + *
> + * @param ctx The initialized data-path service
> + * context data.
> + * @param out_opaque Opaque pointer to be retrieve from
> + * device queue. The driver shall support
> + * NULL input of this parameter.
> + *
> + * @return
> + * - 1 if the job is dequeued and the operation is a success.
> + * - 0 if the job is dequeued but the operation is failed.
> + * - -1 if no job is dequeued.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx
> *ctx,
> + void **out_opaque);
> +
> +/**
> + * Inform the queue pair dequeue jobs finished.
> + *
> + * @param ctx The initialized data-path service context data.
> + * @param n The total number of jobs already dequeued.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
> + uint32_t n);
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> index 81975d72b..e19de458c 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> @@ -316,6 +316,42 @@ typedef uint32_t
> (*cryptodev_sym_cpu_crypto_process_t)
> (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
> union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
>
> +/**
> + * Typedef that the driver provided to get service context private date size.
> + *
> + * @param dev Crypto device pointer.
> + *
> + * @return
> + * - On success return the size of the device's service context private data.
> + * - On failure return negative integer.
> + */
> +typedef int (*cryptodev_dp_get_service_ctx_size_t)(
> + struct rte_cryptodev *dev);
> +
> +/**
> + * Typedef that the driver provided to configure data-path service.
> + *
> + * @param dev Crypto device pointer.
> + * @param qp_id Crypto device queue pair index.
> + * @param service_type Type of the service requested.
> + * @param sess_type session type.
> + * @param session_ctx Session context data.
> + * @param ctx The data-path service context data.
> + * @param is_update Set 1 if ctx is pre-initialized but need
> + * update to different service type or session,
> + * but the rest driver data remains the same.
> + * buffer will always be one.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +typedef int (*cryptodev_dp_configure_service_t)(
> + struct rte_cryptodev *dev, uint16_t qp_id,
> + enum rte_crypto_dp_service service_type,
> + enum rte_crypto_op_sess_type sess_type,
> + union rte_cryptodev_session_ctx session_ctx,
> + struct rte_crypto_dp_service_ctx *ctx,
> + uint8_t is_update);
>
> /** Crypto device operations function pointer table */
> struct rte_cryptodev_ops {
> @@ -348,8 +384,16 @@ struct rte_cryptodev_ops {
> /**< Clear a Crypto sessions private data. */
> cryptodev_asym_free_session_t asym_session_clear;
> /**< Clear a Crypto sessions private data. */
> - cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
> - /**< process input data synchronously (cpu-crypto). */
> + union {
> + cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
> + /**< process input data synchronously (cpu-crypto). */
> + struct {
> + cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
> + /**< Get data path service context data size. */
> + cryptodev_dp_configure_service_t configure_service;
> + /**< Initialize crypto service ctx data. */
> + };
> + };
> };
>
>
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map
> b/lib/librte_cryptodev/rte_cryptodev_version.map
> index 02f6dcf72..10388ae90 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -105,4 +105,14 @@ EXPERIMENTAL {
>
> # added in 20.08
> rte_cryptodev_get_qp_status;
> +
> + # added in 20.11
> + rte_cryptodev_dp_configure_service;
rte_cryptodev_configure_raw_dp
> + rte_cryptodev_dp_get_service_ctx_data_size;
rte_cryptodev_get_raw_dp_ctx_size
> + rte_cryptodev_dp_sym_dequeue;
rte_cryptodev_raw_dequeue_burst
> + rte_cryptodev_dp_sym_dequeue_done;
rte_cryptodev_raw_dequeue_done
> + rte_cryptodev_dp_sym_dequeue_single_job;
rte_cryptodev_raw_dequeue
> + rte_cryptodev_dp_sym_submit_done;
rte_cryptodev_raw_enqueue_done
> + rte_cryptodev_dp_sym_submit_single_job;
rte_cryptodev_raw_enqueue
> + rte_cryptodev_dp_sym_submit_vec;
rte_cryptodev_raw_enqueue_burst
> };
Please use above names for the APIs.
No need for keyword dp in enq/deq APIs as it is implicit that enq/deq APIs are data path APIs.
I could not complete the review of this patch as I see a lot of issues in the current version.
Please reply to my queries so that the review can be completed.
Regards,
Akhil
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-18 21:50 ` Akhil Goyal
@ 2020-09-21 10:40 ` Zhang, Roy Fan
2020-09-21 11:59 ` Akhil Goyal
0 siblings, 1 reply; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-21 10:40 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil,
Thanks for the review!
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Friday, September 18, 2020 10:50 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
> Hi Fan,
>
> > Subject: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
> >
> > This patch adds data-path service APIs for enqueue and dequeue
> > operations to cryptodev. The APIs support flexible user-define
> > enqueue and dequeue behaviors and operation mode.
> >
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
> > ---
> > lib/librte_cryptodev/rte_crypto.h | 9 +
> > lib/librte_cryptodev/rte_crypto_sym.h | 49 ++-
> > lib/librte_cryptodev/rte_cryptodev.c | 98 +++++
> > lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++-
> > lib/librte_cryptodev/rte_cryptodev_pmd.h | 48 ++-
> > .../rte_cryptodev_version.map | 10 +
> > 6 files changed, 540 insertions(+), 9 deletions(-)
> >
> > diff --git a/lib/librte_cryptodev/rte_crypto.h
> b/lib/librte_cryptodev/rte_crypto.h
> > index fd5ef3a87..f009be9af 100644
> > --- a/lib/librte_cryptodev/rte_crypto.h
> > +++ b/lib/librte_cryptodev/rte_crypto.h
> > @@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct
> > rte_crypto_op *op,
> > return 0;
> > }
> >
> > +/** Crypto data-path service types */
> > +enum rte_crypto_dp_service {
> > + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
> > + RTE_CRYPTO_DP_SYM_AUTH_ONLY,
> > + RTE_CRYPTO_DP_SYM_CHAIN,
> > + RTE_CRYPTO_DP_SYM_AEAD,
> > + RTE_CRYPTO_DP_N_SERVICE
> > +};
>
> Comments missing for this enum.
> Do we really need this enum?
> Can we not have this info in the driver from the xform list?
> And if we really want to add this, why to have it specific to raw data path APIs?
>
Will add comments to this enum.
Unless the driver stores the xform data in a certain way (in fact QAT does), it may not know which data path to choose from.
The purpose of this enum is to let the driver quickly attach the correct handler into the service data structure.
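The handler-attach idea described above could look roughly like the following, with purely illustrative names (not the actual QAT implementation): a one-time table lookup keyed by the service enum instead of re-parsing the xform chain on every configure call.

```c
/* Toy sketch: the service-type enum lets a driver pick the right
 * enqueue handler with a single table lookup at configure time.
 * All names are illustrative stand-ins. */
#include <assert.h>

enum toy_dp_service {
	TOY_DP_CIPHER_ONLY = 0,
	TOY_DP_AUTH_ONLY,
	TOY_DP_CHAIN,
	TOY_DP_AEAD,
	TOY_DP_N_SERVICE
};

typedef int (*toy_submit_fn)(void);

static int submit_cipher(void) { return 1; }
static int submit_auth(void)   { return 2; }
static int submit_chain(void)  { return 3; }
static int submit_aead(void)   { return 4; }

/* One-shot handler attach keyed by the requested service type. */
static const toy_submit_fn toy_handlers[TOY_DP_N_SERVICE] = {
	[TOY_DP_CIPHER_ONLY] = submit_cipher,
	[TOY_DP_AUTH_ONLY]   = submit_auth,
	[TOY_DP_CHAIN]       = submit_chain,
	[TOY_DP_AEAD]        = submit_aead,
};
```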
> > +
> > #ifdef __cplusplus
> > }
> > #endif
> > diff --git a/lib/librte_cryptodev/rte_crypto_sym.h
> > b/lib/librte_cryptodev/rte_crypto_sym.h
> > index f29c98051..376412e94 100644
> > --- a/lib/librte_cryptodev/rte_crypto_sym.h
> > +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> > @@ -50,6 +50,30 @@ struct rte_crypto_sgl {
> > uint32_t num;
> > };
> >
> > +/**
> > + * Symmetri Crypto Addtional Data other than src and destination data.
> > + * Supposed to be used to pass IV/digest/aad data buffers with lengths
> > + * defined when creating crypto session.
> > + */
>
> Fix typo
Thanks will change.
>
> > +union rte_crypto_sym_additional_data {
> > + struct {
> > + void *cipher_iv_ptr;
> > + rte_iova_t cipher_iv_iova;
> > + void *auth_iv_ptr;
> > + rte_iova_t auth_iv_iova;
> > + void *digest_ptr;
> > + rte_iova_t digest_iova;
> > + } cipher_auth;
>
> Should be chain instead of cipher_auth
This field is used for cipher-only, auth-only, or chain use cases, so I believe this is a better name for it.
>
> > + struct {
> > + void *iv_ptr;
> > + rte_iova_t iv_iova;
> > + void *digest_ptr;
> > + rte_iova_t digest_iova;
> > + void *aad_ptr;
> > + rte_iova_t aad_iova;
> > + } aead;
> > +};
> > +
> > /**
> > * Synchronous operation descriptor.
> > * Supposed to be used with CPU crypto API call.
> > @@ -57,12 +81,25 @@ struct rte_crypto_sgl {
> > struct rte_crypto_sym_vec {
> > /** array of SGL vectors */
> > struct rte_crypto_sgl *sgl;
> > - /** array of pointers to IV */
> > - void **iv;
> > - /** array of pointers to AAD */
> > - void **aad;
> > - /** array of pointers to digest */
> > - void **digest;
> > +
> > + union {
> > +
> > + /* Supposed to be used with CPU crypto API call. */
> > + struct {
> > + /** array of pointers to IV */
> > + void **iv;
> > + /** array of pointers to AAD */
> > + void **aad;
> > + /** array of pointers to digest */
> > + void **digest;
> > + };
>
> Can we also name this struct?
> Probably we should split this as a separate patch.
[Then this is an API break, right?]
>
> > +
> > + /* Supposed to be used with
> > rte_cryptodev_dp_sym_submit_vec()
> > + * call.
> > + */
> > + union rte_crypto_sym_additional_data *additional_data;
> > + };
> > +
>
> Can we get rid of this unnecessary union rte_crypto_sym_additional_data
> And place chain and aead directly in the union? At any point, only one of the
> three
> would be used.
We have two main use cases, one for CPU crypto and one for the data-path APIs. Within each use case there are four types of algorithm (cipher only/auth only/AEAD/chain); one use case requires both HW (IOVA) and virtual addresses, the other doesn't.
Including that many unions in a structure that was initially designed for CPU crypto only seems to cause too much confusion.
I suggest using a different structure rather than squeezing everything into one big union.
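As a self-contained illustration of the additional-data layout under debate (field names mirror the quoted patch; `toy_iova_t` stands in for `rte_iova_t` so the snippet compiles standalone), only one view of the union is ever live for a given job:

```c
/* Toy version of union rte_crypto_sym_additional_data from the quoted
 * patch: a cipher/auth view and an AEAD view sharing the same storage. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t toy_iova_t; /* stand-in for rte_iova_t */

union toy_sym_additional_data {
	struct {
		void *cipher_iv_ptr;  toy_iova_t cipher_iv_iova;
		void *auth_iv_ptr;    toy_iova_t auth_iv_iova;
		void *digest_ptr;     toy_iova_t digest_iova;
	} cipher_auth;
	struct {
		void *iv_ptr;         toy_iova_t iv_iova;
		void *digest_ptr;     toy_iova_t digest_iova;
		void *aad_ptr;        toy_iova_t aad_iova;
	} aead;
};

/* Fill the AEAD view; the cipher_auth view is dead for this job. */
static void toy_set_aead(union toy_sym_additional_data *d,
			 void *iv, void *digest, void *aad)
{
	d->aead.iv_ptr = iv;
	d->aead.digest_ptr = digest;
	d->aead.aad_ptr = aad;
}
```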
>
> > /**
> > * array of statuses for each operation:
> > * - 0 on success
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.c
> > b/lib/librte_cryptodev/rte_cryptodev.c
> > index 1dd795bcb..4f59cf800 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.c
> > +++ b/lib/librte_cryptodev/rte_cryptodev.c
> > @@ -1914,6 +1914,104 @@
> rte_cryptodev_sym_cpu_crypto_process(uint8_t
> > dev_id,
> > return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
> > }
> >
> > +int
> > +rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id)
> > +{
> > + struct rte_cryptodev *dev;
> > + int32_t size = sizeof(struct rte_crypto_dp_service_ctx);
> > + int32_t priv_size;
> > +
> > + if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> > + return -1;
> > +
> > + dev = rte_cryptodev_pmd_get_dev(dev_id);
> > +
> > + if (*dev->dev_ops->get_drv_ctx_size == NULL ||
> > + !(dev->feature_flags &
> > RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)) {
> > + return -1;
> > + }
>
> I have some suggestions for the naming of the APIs / flags in the doc patch,
> Please check that and make changes in this patch.
> Also, you have missed adding this feature flag in the
> doc/guides/cryptodevs/features/default.ini file.
> And Subsequently in the doc/guides/cryptodevs/features/qat.ini file.
>
Will update. Thanks a lot!
> > +
> > + priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
> > + if (priv_size < 0)
> > + return -1;
> > +
> > + return RTE_ALIGN_CEIL((size + priv_size), 8);
> > +}
> > +
> > +int
> > +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
> > + enum rte_crypto_dp_service service_type,
> > + enum rte_crypto_op_sess_type sess_type,
> > + union rte_cryptodev_session_ctx session_ctx,
> > + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update)
> > +{
> > + struct rte_cryptodev *dev;
> > +
> > + if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
> > + return -1;
> > +
> > + dev = rte_cryptodev_pmd_get_dev(dev_id);
> > + if (!(dev->feature_flags &
> RTE_CRYPTODEV_FF_DATA_PATH_SERVICE)
> > + || dev->dev_ops->configure_service == NULL)
> > + return -1;
> It would be better to return actual error number like ENOTSUP/EINVAL.
> It would be helpful in debugging.
Will change.
>
> > +
> > + return (*dev->dev_ops->configure_service)(dev, qp_id,
> service_type,
> > + sess_type, session_ctx, ctx, is_update);
> > +}
> > +
> > +int
> > +rte_cryptodev_dp_sym_submit_single_job(struct
> rte_crypto_dp_service_ctx
> > *ctx,
> > + struct rte_crypto_vec *data, uint16_t n_data_vecs,
> > + union rte_crypto_sym_ofs ofs,
> > + union rte_crypto_sym_additional_data *additional_data,
> > + void *opaque)
> > +{
>
> Can we have some debug checks for NULL checking.
Will do.
>
> > + return _cryptodev_dp_submit_single_job(ctx, data, n_data_vecs,
> ofs,
> > + additional_data, opaque);
>
> Unnecessary function call _cryptodev_dp_submit_single_job.
> You can directly call
> return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
> data, n_data_vecs, ofs, additional_data, opaque);
>
Will change.
>
> > +}
> > +
> > +uint32_t
> > +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx
> *ctx,
> > + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
> > + void **opaque)
> > +{
> > + return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec,
> > + ofs, opaque);
> > +}
> > +
> > +int
> > +rte_cryptodev_dp_sym_dequeue_single_job(struct
> rte_crypto_dp_service_ctx
> > *ctx,
> > + void **out_opaque)
> > +{
> > + return _cryptodev_dp_sym_dequeue_single_job(ctx, out_opaque);
>
> Same here.
> > +}
> > +
> > +int
> > +rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx
> *ctx,
> > + uint32_t n)
> > +{
> > + return (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data,
> n);
> > +}
> > +
> > +int
> > +rte_cryptodev_dp_sym_dequeue_done(struct
> rte_crypto_dp_service_ctx *ctx,
> > + uint32_t n)
> > +{
> > + return (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data,
> n);
> > +}
> > +
> > +uint32_t
> > +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
> > + rte_cryptodev_get_dequeue_count_t get_dequeue_count,
> > + rte_cryptodev_post_dequeue_t post_dequeue,
> > + void **out_opaque, uint8_t is_opaque_array,
> > + uint32_t *n_success_jobs)
> > +{
> > + return (*ctx->dequeue_opaque)(ctx->qp_data, ctx-
> >drv_service_data,
> > + get_dequeue_count, post_dequeue, out_opaque,
> > is_opaque_array,
> > + n_success_jobs);
> > +}
> > +
> > /** Initialise rte_crypto_op mempool element */
> > static void
> > rte_crypto_op_init(struct rte_mempool *mempool,
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.h
> > b/lib/librte_cryptodev/rte_cryptodev.h
> > index 7b3ebc20f..4da0389d1 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > @@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum
> > rte_crypto_asym_xform_type *xform_enum,
> > /**< Support symmetric session-less operations */
> > #define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL
> > << 23)
> > /**< Support operations on data which is not byte aligned */
> > -
> > +#define RTE_CRYPTODEV_FF_DATA_PATH_SERVICE (1ULL
> << 24)
> > +/**< Support accelerated specific raw data as input */
>
> Support data path APIs for raw data as input.
Will update.
>
> >
> > /**
> > * Get the name of a crypto device feature flag
> > @@ -1351,6 +1352,338 @@
> rte_cryptodev_sym_cpu_crypto_process(uint8_t
> > dev_id,
> > struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs
> ofs,
> > struct rte_crypto_sym_vec *vec);
> >
> > +/**
> > + * Get the size of the data-path service context for all registered drivers.
>
> For all drivers ? or for a device?
For a device - sorry for the typo.
>
> > + *
> > + * @param dev_id The device identifier.
> > + *
> > + * @return
> > + * - If the device supports data-path service, return the context size.
> > + * - If the device does not support the data-path service, return -1.
> > + */
> > +__rte_experimental
> > +int
> > +rte_cryptodev_dp_get_service_ctx_data_size(uint8_t dev_id);
> > +
> > +/**
> > + * Union of different crypto session types, including session-less xform
> > + * pointer.
>
> Union of different symmetric crypto session types ..
Sorry, will change.
>
> > + */
> > +union rte_cryptodev_session_ctx {
> > + struct rte_cryptodev_sym_session *crypto_sess;
> > + struct rte_crypto_sym_xform *xform;
> > + struct rte_security_session *sec_sess;
> > +};
> > +
> > +/**
> > + * Submit a data vector into device queue but the driver will not start
> > + * processing until rte_cryptodev_dp_sym_submit_vec() is called.
> > + *
> > + * @param qp Driver specific queue pair data.
> > + * @param service_data Driver specific service data.
> > + * @param vec The array of job vectors.
> > + * @param ofs Start and stop offsets for auth and cipher
> > + * operations.
> > + * @param opaque The array of opaque data for
> dequeue.
>
> Can you elaborate the usage of opaque here?
Opaque is user-provided data that the user wants to retrieve back at dequeue time. Will elaborate in the comment.
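A minimal sketch of that opaque round trip, with hypothetical names (not the cryptodev API): the pointer handed in at enqueue is returned verbatim at dequeue, letting the application attach arbitrary per-job state without any driver involvement.

```c
/* Toy queue pair demonstrating the opaque mechanism: the driver never
 * interprets the opaque pointer, it only stores and returns it. */
#include <assert.h>
#include <stddef.h>

#define TOY_QP_DEPTH 8

struct toy_qp {
	void *opaque[TOY_QP_DEPTH];
	unsigned int enq, deq;
};

/* Store the user's opaque pointer alongside the job. */
static int toy_enqueue(struct toy_qp *qp, void *user_opaque)
{
	if (qp->enq - qp->deq == TOY_QP_DEPTH)
		return -1;			/* queue full */
	qp->opaque[qp->enq++ % TOY_QP_DEPTH] = user_opaque;
	return 0;
}

/* Hand the same pointer back when the job is "processed". */
static void *toy_dequeue(struct toy_qp *qp)
{
	if (qp->enq == qp->deq)
		return NULL;			/* nothing to dequeue */
	return qp->opaque[qp->deq++ % TOY_QP_DEPTH];
}
```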
>
> > + * @return
> > + * - The number of jobs successfully submitted.
> > + */
> > +typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)(
> > + void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec,
> > + union rte_crypto_sym_ofs ofs, void **opaque);
> > +
> > +/**
> > + * Submit single job into device queue but the driver will not start
> > + * processing until rte_cryptodev_dp_sym_submit_vec() is called.
> > + *
> > + * @param qp Driver specific queue pair data.
> > + * @param service_data Driver specific service data.
> > + * @param data The buffer vector.
> > + * @param n_data_vecs Number of buffer vectors.
> > + * @param ofs Start and stop offsets for auth and cipher
> > + * operations.
> > + * @param additional_data IV, digest, and aad data.
> > + * @param opaque The opaque data for dequeue.
> > + * @return
> > + * - On success return 0.
> > + * - On failure return negative integer.
> > + */
> > +typedef int (*cryptodev_dp_submit_single_job_t)(
> > + void *qp, uint8_t *service_data, struct rte_crypto_vec *data,
> > + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
> > + union rte_crypto_sym_additional_data *additional_data,
> > + void *opaque);
> > +
> > +/**
> > + * Inform the queue pair to start processing or finish dequeuing all
> > + * submitted/dequeued jobs.
> > + *
> > + * @param qp Driver specific queue pair data.
> > + * @param service_data Driver specific service data.
> > + * @param n The total number of submitted jobs.
> > + * @return
> > + * - On success return 0.
> > + * - On failure return negative integer.
> > + */
> > +typedef int (*cryptodev_dp_sym_operation_done_t)(void *qp,
> > + uint8_t *service_data, uint32_t n);
> > +
> > +/**
> > + * Typedef that the user provided for the driver to get the dequeue count.
> > + * The function may return a fixed number or the number parsed from
> the
> > opaque
> > + * data stored in the first processed job.
> > + *
> > + * @param opaque Dequeued opaque data.
> > + **/
> > +typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void
> *opaque);
> > +
> > +/**
> > + * Typedef that the user provided to deal with post dequeue operation,
> such
> > + * as filling status.
> > + *
> > + * @param opaque Dequeued opaque data. In case
> > + *
> > RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is
> > + * set, this value will be the opaque data stored
> > + * in the specific processed jobs referenced by
> > + * index, otherwise it will be the opaque data
> > + * stored in the first processed job in the burst.
>
> What is RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY, I did not find this in
> the series.
Will remove.
>
> > + * @param index Index number of the processed job.
> > + * @param is_op_success Driver filled operation status.
> > + **/
> > +typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t
> index,
> > + uint8_t is_op_success);
> > +
> > +/**
> > + * Dequeue symmetric crypto processing of user provided data.
> > + *
> > + * @param qp Driver specific queue pair data.
> > + * @param service_data Driver specific service data.
> > + * @param get_dequeue_count User provided callback function to
> > + * obtain dequeue count.
> > + * @param post_dequeue User provided callback function to
> > + * post-process a dequeued operation.
> > + * @param out_opaque         Opaque pointer array to be retrieved from
> > + *                           device queue. In case *is_opaque_array* is
> > + *                           set there should be enough room to store all
> > + *                           opaque data.
> > + * @param is_opaque_array    Set 1 if every dequeued job will have its
> > + *                           opaque data written into the *out_opaque*
> > + *                           array.
> > + * @param n_success_jobs     Driver-written value to specify the total
> > + *                           successful operations count.
> > + *
> > + * @return
> > + * - Returns number of dequeued packets.
> > + */
> > +typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data,
> > + rte_cryptodev_get_dequeue_count_t get_dequeue_count,
> > + rte_cryptodev_post_dequeue_t post_dequeue,
> > + void **out_opaque, uint8_t is_opaque_array,
> > + uint32_t *n_success_jobs);
> > +
> > +/**
> > + * Dequeue symmetric crypto processing of user provided data.
> > + *
> > + * @param qp Driver specific queue pair data.
> > + * @param service_data Driver specific service data.
> > + * @param out_opaque Opaque pointer to be retrieved from
> > + * device queue.
> > + *
> > + * @return
> > + * - 1 if the job is dequeued and the operation is a success.
> > + * - 0 if the job is dequeued but the operation is failed.
> > + * - -1 if no job is dequeued.
> > + */
> > +typedef int (*cryptodev_dp_sym_dequeue_single_job_t)(
> > + void *qp, uint8_t *service_data, void **out_opaque);
> > +
> > +/**
> > + * Context data for asynchronous crypto process.
> > + */
> > +struct rte_crypto_dp_service_ctx {
> > + void *qp_data;
> > +
> > + struct {
> > + cryptodev_dp_submit_single_job_t submit_single_job;
> > + cryptodev_dp_sym_submit_vec_t submit_vec;
> > + cryptodev_dp_sym_operation_done_t submit_done;
> > + cryptodev_dp_sym_dequeue_t dequeue_opaque;
> > + cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
> > + cryptodev_dp_sym_operation_done_t dequeue_done;
> > + };
> > +
> > + /* Driver specific service data */
> > + __extension__ uint8_t drv_service_data[];
> > +};
>
> Comments missing for structure params.
> Struct name can be rte_crypto_raw_dp_ctx.
>
> Who allocate and free this structure?
Same as the crypto session, the user needs to query the driver specific service
data size and allocate the buffer accordingly. The difference is it does not
have to be from a mempool as it can be reused.
>
> > +
> > +/**
> > + * Configure one DP service context data. When calling this function for
> > + * the first time the user should unset the *is_update* parameter and the
> > + * driver will fill the necessary operation data into the ctx buffer. The
> > + * data stored in the ctx buffer is written back to the driver only when
> > + * rte_cryptodev_dp_submit_done() is called.
> > + *
> > + * @param dev_id The device identifier.
> > + * @param qp_id              The index of the queue pair from which to
> > + *                           retrieve processed packets. The value must
> > + *                           be in the range [0, nb_queue_pair - 1]
> > + *                           previously supplied to
> > + *                           rte_cryptodev_configure().
> > + * @param service_type Type of the service requested.
> > + * @param sess_type session type.
> > + * @param session_ctx Session context data.
> > + * @param ctx The data-path service context data.
> > + * @param is_update          Set 1 if ctx is pre-initialized but needs
> > + *                           updating to a different service type or
> > + *                           session, while the rest of the driver data
> > + *                           remains the same. Since the service context
> > + *                           data buffer is provided by the user, the
> > + *                           driver will not check the validity of the
> > + *                           buffer nor its content. It is the user's
> > + *                           obligation to initialize and use the buffer
> > + *                           properly when setting this field.
> > + * @return
> > + * - On success return 0.
> > + * - On failure return negative integer.
> > + */
> > +__rte_experimental
> > +int
> > +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id,
> > + enum rte_crypto_dp_service service_type,
> > + enum rte_crypto_op_sess_type sess_type,
> > + union rte_cryptodev_session_ctx session_ctx,
> > + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update);
> > +
> > +static __rte_always_inline int
> > +_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
> > + struct rte_crypto_vec *data, uint16_t n_data_vecs,
> > + union rte_crypto_sym_ofs ofs,
> > + union rte_crypto_sym_additional_data *additional_data,
> > + void *opaque)
> > +{
> > + return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data,
> > + data, n_data_vecs, ofs, additional_data, opaque);
> > +}
> > +
> > +static __rte_always_inline int
> > +_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
> > + void **out_opaque)
> > +{
> > + return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
> > + out_opaque);
> > +}
> > +
> > +/**
> > + * Submit single job into device queue but the driver will not start
> > + * processing until rte_cryptodev_dp_submit_done() is called. This is a
> > + * simplified
> > + *
> > + * @param ctx The initialized data-path service context data.
> > + * @param data The buffer vector.
> > + * @param n_data_vecs Number of buffer vectors.
> > + * @param ofs Start and stop offsets for auth and cipher
> > + * operations.
> > + * @param additional_data IV, digest, and aad
> > + * @param opaque          The array of opaque data for dequeue.
> > + * @return
> > + * - On success return 0.
> > + * - On failure return negative integer.
> > + */
> > +__rte_experimental
> > +int
> > +rte_cryptodev_dp_sym_submit_single_job(struct rte_crypto_dp_service_ctx *ctx,
> > + struct rte_crypto_vec *data, uint16_t n_data_vecs,
> > + union rte_crypto_sym_ofs ofs,
> > + union rte_crypto_sym_additional_data *additional_data,
> > + void *opaque);
> > +
> > +/**
> > + * Submit a data vector into device queue but the driver will not start
> > + * processing until rte_cryptodev_dp_submit_done() is called.
> > + *
> > + * @param ctx The initialized data-path service context data.
> > + * @param vec The array of job vectors.
> > + * @param ofs Start and stop offsets for auth and cipher operations.
> > + * @param opaque The array of opaque data for dequeue.
> > + * @return
> > + * - The number of jobs successfully submitted.
> > + */
> > +__rte_experimental
> > +uint32_t
> > +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx,
> > + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
> > + void **opaque);
> > +
> > +/**
> > + * Command the queue pair to start processing all submitted jobs from the
> > + * last rte_cryptodev_init_dp_service() call.
> > + *
> > + * @param ctx The initialized data-path service context data.
> > + * @param n The total number of submitted jobs.
> > + */
> > +__rte_experimental
> > +int
> > +rte_cryptodev_dp_sym_submit_done(struct rte_crypto_dp_service_ctx *ctx,
> > + uint32_t n);
> > +
> > +/**
> > + * Dequeue symmetric crypto processing of user provided data.
> > + *
> > + * @param ctx                The initialized data-path service
> > + *                           context data.
> > + * @param get_dequeue_count  User provided callback function to
> > + *                           obtain dequeue count.
> > + * @param post_dequeue       User provided callback function to
> > + *                           post-process a dequeued operation.
> > + * @param out_opaque         Opaque pointer array to be retrieved from
> > + *                           device queue. In case *is_opaque_array* is
> > + *                           set there should be enough room to store all
> > + *                           opaque data.
> > + * @param is_opaque_array    Set 1 if every dequeued job will have its
> > + *                           opaque data written into the *out_opaque*
> > + *                           array.
> > + * @param n_success_jobs     Driver-written value to specify the total
> > + *                           successful operations count.
> > + *
> > + * @return
> > + * - Returns number of dequeued packets.
> > + */
> > +__rte_experimental
> > +uint32_t
> > +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx,
> > + rte_cryptodev_get_dequeue_count_t get_dequeue_count,
> > + rte_cryptodev_post_dequeue_t post_dequeue,
> > + void **out_opaque, uint8_t is_opaque_array,
> > + uint32_t *n_success_jobs);
> > +
> > +/**
> > + * Dequeue Single symmetric crypto processing of user provided data.
> > + *
> > + * @param ctx The initialized data-path service
> > + * context data.
> > + * @param out_opaque         Opaque pointer to be retrieved from
> > + *                           device queue. The driver shall support
> > + *                           NULL input of this parameter.
> > + *
> > + * @return
> > + * - 1 if the job is dequeued and the operation is a success.
> > + * - 0 if the job is dequeued but the operation is failed.
> > + * - -1 if no job is dequeued.
> > + */
> > +__rte_experimental
> > +int
> > +rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
> > + void **out_opaque);
> > +
> > +/**
> > + * Inform the queue pair dequeue jobs finished.
> > + *
> > + * @param ctx The initialized data-path service context data.
> > + * @param n The total number of jobs already dequeued.
> > + */
> > +__rte_experimental
> > +int
> > +rte_cryptodev_dp_sym_dequeue_done(struct rte_crypto_dp_service_ctx *ctx,
> > + uint32_t n);
> > +
> > #ifdef __cplusplus
> > }
> > #endif
> > diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> > b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> > index 81975d72b..e19de458c 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> > @@ -316,6 +316,42 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
> > (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
> > union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
> >
> > +/**
> > + * Typedef of the driver-provided function to get the service context
> > + * private data size.
> > + *
> > + * @param dev Crypto device pointer.
> > + *
> > + * @return
> > + * - On success return the size of the device's service context private data.
> > + * - On failure return negative integer.
> > + */
> > +typedef int (*cryptodev_dp_get_service_ctx_size_t)(
> > + struct rte_cryptodev *dev);
> > +
> > +/**
> > + * Typedef of the driver-provided function to configure the data-path service.
> > + *
> > + * @param dev Crypto device pointer.
> > + * @param qp_id Crypto device queue pair index.
> > + * @param service_type Type of the service requested.
> > + * @param sess_type session type.
> > + * @param session_ctx Session context data.
> > + * @param ctx The data-path service context data.
> > + * @param is_update Set 1 if ctx is pre-initialized but needs
> > + * updating to a different service type or session,
> > + * while the rest of the driver data remains the same.
> > + * @return
> > + * - On success return 0.
> > + * - On failure return negative integer.
> > + */
> > +typedef int (*cryptodev_dp_configure_service_t)(
> > + struct rte_cryptodev *dev, uint16_t qp_id,
> > + enum rte_crypto_dp_service service_type,
> > + enum rte_crypto_op_sess_type sess_type,
> > + union rte_cryptodev_session_ctx session_ctx,
> > + struct rte_crypto_dp_service_ctx *ctx,
> > + uint8_t is_update);
> >
> > /** Crypto device operations function pointer table */
> > struct rte_cryptodev_ops {
> > @@ -348,8 +384,16 @@ struct rte_cryptodev_ops {
> > /**< Clear a Crypto sessions private data. */
> > cryptodev_asym_free_session_t asym_session_clear;
> > /**< Clear a Crypto sessions private data. */
> > - cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
> > - /**< process input data synchronously (cpu-crypto). */
> > + union {
> > + cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
> > + /**< process input data synchronously (cpu-crypto). */
> > + struct {
> > + cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
> > + /**< Get data path service context data size. */
> > + cryptodev_dp_configure_service_t configure_service;
> > + /**< Initialize crypto service ctx data. */
> > + };
> > + };
> > };
> >
> >
> > diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map
> > b/lib/librte_cryptodev/rte_cryptodev_version.map
> > index 02f6dcf72..10388ae90 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> > +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> > @@ -105,4 +105,14 @@ EXPERIMENTAL {
> >
> > # added in 20.08
> > rte_cryptodev_get_qp_status;
> > +
> > + # added in 20.11
> > + rte_cryptodev_dp_configure_service;
>
> rte_cryptodev_configure_raw_dp
>
> > + rte_cryptodev_dp_get_service_ctx_data_size;
>
> rte_cryptodev_get_raw_dp_ctx_size
>
> > + rte_cryptodev_dp_sym_dequeue;
>
> rte_cryptodev_raw_dequeue_burst
>
> > + rte_cryptodev_dp_sym_dequeue_done;
> rte_cryptodev_raw_dequeue_done
>
> > + rte_cryptodev_dp_sym_dequeue_single_job;
> rte_cryptodev_raw_dequeue
>
> > + rte_cryptodev_dp_sym_submit_done;
>
> rte_cryptodev_raw_enqueue_done
>
> > + rte_cryptodev_dp_sym_submit_single_job;
>
> rte_cryptodev_raw_enqueue
>
> > + rte_cryptodev_dp_sym_submit_vec;
>
> rte_cryptodev_raw_enqueue_burst
>
> > };
>
> Please use above names for the APIs.
> No need for keyword dp in enq/deq APIs as it is implicit that enq/deq APIs
> are data path APIs.
>
Will do.
> I could not complete the review of this patch specifically as I see a lot of
> issues in the current version.
> Please send replies to my queries so that the review can be completed.
>
> Regards,
> Akhil
>
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-21 10:40 ` Zhang, Roy Fan
@ 2020-09-21 11:59 ` Akhil Goyal
2020-09-21 15:26 ` Zhang, Roy Fan
2020-09-21 15:41 ` Zhang, Roy Fan
0 siblings, 2 replies; 84+ messages in thread
From: Akhil Goyal @ 2020-09-21 11:59 UTC (permalink / raw)
To: Zhang, Roy Fan, dev, Ananyev, Konstantin, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Fan,
> > >
> > > +/** Crypto data-path service types */
> > > +enum rte_crypto_dp_service {
> > > + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
> > > + RTE_CRYPTO_DP_SYM_AUTH_ONLY,
> > > + RTE_CRYPTO_DP_SYM_CHAIN,
> > > + RTE_CRYPTO_DP_SYM_AEAD,
> > > + RTE_CRYPTO_DP_N_SERVICE
> > > +};
> >
> > Comments missing for this enum.
> > Do we really need this enum?
> > Can we not have this info in the driver from the xform list?
> > And if we really want to add this, why to have it specific to raw data path APIs?
> >
> Will add comments to this enum.
> Unless the driver stores the xform data in a certain way (in fact QAT does),
> the driver may not know which data-path to choose.
> The purpose of having this enum is so that the driver knows how to attach the
> correct handler into the service data structure fast.
>
I believe all drivers are already storing that information in some way in the
session private data. This enum is maintained inside the driver in the current
implementation and is not specific to the raw data path APIs. If you are
introducing this enum in the library, then it should be generic for the legacy
case as well.
> >
> > > +union rte_crypto_sym_additional_data {
> > > + struct {
> > > + void *cipher_iv_ptr;
> > > + rte_iova_t cipher_iv_iova;
> > > + void *auth_iv_ptr;
> > > + rte_iova_t auth_iv_iova;
> > > + void *digest_ptr;
> > > + rte_iova_t digest_iova;
> > > + } cipher_auth;
> >
> > Should be chain instead of cipher_auth
> This field is used for cipher only, auth only, or chain use-cases so I believe this is
> a better name for it.
Agreed that this struct will be used for all 3 cases; that is what happens in
the other crypto cases. We use chain for all three of these cases in the legacy
code path. A chain can be of one or two xforms and the ordering can be
anything: cipher only, auth only, cipher-auth, or auth-cipher.
> >
> > > + struct {
> > > + void *iv_ptr;
> > > + rte_iova_t iv_iova;
> > > + void *digest_ptr;
> > > + rte_iova_t digest_iova;
> > > + void *aad_ptr;
> > > + rte_iova_t aad_iova;
> > > + } aead;
> > > +};
> > > +
> > > /**
> > > * Synchronous operation descriptor.
> > > * Supposed to be used with CPU crypto API call.
> > > @@ -57,12 +81,25 @@ struct rte_crypto_sgl {
> > > struct rte_crypto_sym_vec {
> > > /** array of SGL vectors */
> > > struct rte_crypto_sgl *sgl;
> > > - /** array of pointers to IV */
> > > - void **iv;
> > > - /** array of pointers to AAD */
> > > - void **aad;
> > > - /** array of pointers to digest */
> > > - void **digest;
> > > +
> > > + union {
> > > +
> > > + /* Supposed to be used with CPU crypto API call. */
> > > + struct {
> > > + /** array of pointers to IV */
> > > + void **iv;
> > > + /** array of pointers to AAD */
> > > + void **aad;
> > > + /** array of pointers to digest */
> > > + void **digest;
> > > + };
> >
> > Can we also name this struct?
> > Probably we should split this as a separate patch.
> [Then this is an API break right?]
Since this an LTS release, I am ok to take this change.
But others can comment on this.
@Ananyev, Konstantin, @Thomas Monjalon
Can you comment on this?
> >
> > > +
> > > + /* Supposed to be used with
> > > rte_cryptodev_dp_sym_submit_vec()
> > > + * call.
> > > + */
> > > + union rte_crypto_sym_additional_data *additional_data;
> > > + };
> > > +
> >
> > Can we get rid of this unnecessary union rte_crypto_sym_additional_data
> > And place chain and aead directly in the union? At any point, only one of the
> > three
> > would be used.
> We have 2 main use cases, one for CPU crypto and one for the data-path APIs.
> Within each use case there are 4 types of algo (cipher only/auth
> only/aead/chain), one requiring both the physical and virtual address, the
> other not.
> It seems to cause too much confusion to include this many unions in the
> structure that was initially designed for CPU crypto only.
> I suggest it is better to use a different structure than to squeeze all into
> a big union.
>
IMO, the following union can clarify all doubts.
@Ananyev, Konstantin: Any suggestions from your side?
/** IV and aad information for various use cases. */
union {
/** Supposed to be used with CPU crypto API call. */
struct {
/** array of pointers to IV */
void **iv;
/** array of pointers to AAD */
void **aad;
/** array of pointers to digest */
void **digest;
} cpu_crypto; < or any other useful name>
/* Supposed to be used with HW raw crypto API call. */
struct {
void *cipher_iv_ptr;
rte_iova_t cipher_iv_iova;
void *auth_iv_ptr;
rte_iova_t auth_iv_iova;
void *digest_ptr;
rte_iova_t digest_iova;
} hw_chain;
/* Supposed to be used with HW raw crypto API call. */
struct {
void *iv_ptr;
rte_iova_t iv_iova;
void *digest_ptr;
rte_iova_t digest_iova;
void *aad_ptr;
rte_iova_t aad_iova;
} hw_aead;
};
> > > +/**
> > > + * Context data for asynchronous crypto process.
> > > + */
> > > +struct rte_crypto_dp_service_ctx {
> > > + void *qp_data;
> > > +
> > > + struct {
> > > + cryptodev_dp_submit_single_job_t submit_single_job;
> > > + cryptodev_dp_sym_submit_vec_t submit_vec;
> > > + cryptodev_dp_sym_operation_done_t submit_done;
> > > + cryptodev_dp_sym_dequeue_t dequeue_opaque;
> > > + cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
> > > + cryptodev_dp_sym_operation_done_t dequeue_done;
> > > + };
> > > +
> > > + /* Driver specific service data */
> > > + __extension__ uint8_t drv_service_data[];
> > > +};
> >
> > Comments missing for structure params.
> > Struct name can be rte_crypto_raw_dp_ctx.
> >
> > Who allocate and free this structure?
> Same as the crypto session, the user needs to query the driver specific
> service data size and allocate the buffer accordingly. The difference is it
> does not have to be from a mempool as it can be reused.
So this structure is saved and filled by the lib/driver and not the application. Right?
This struct is opaque to the application and will be part of session private data. Right?
Assignment and calling the appropriate driver's callbacks will be hidden inside
the library and will be opaque to the application. In other words, the structure
is not exposed to the application.
Please add relevant comments on top of this structure.
> > > +static __rte_always_inline int
> > > +_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx,
> > > + void **out_opaque)
> > > +{
> > > + return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
> > > + out_opaque);
> > > +}
> > > +
> > > +/**
> > > + * Submit single job into device queue but the driver will not start
> > > + * processing until rte_cryptodev_dp_submit_done() is called. This is a
> > > + * simplified
Comment not complete.
Regards,
Akhil
* Re: [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
2020-09-18 20:39 ` Akhil Goyal
@ 2020-09-21 12:28 ` Zhang, Roy Fan
2020-09-23 13:37 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-21 12:28 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Anoob Joseph
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Friday, September 18, 2020 9:39 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
>
> Hi Fan,
>
> > Subject: [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
> >
> > This patch updates programmer's guide to demonstrate the usage
> > and limitations of cryptodev symmetric crypto data-path service
> > APIs.
> >
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > ---
> > doc/guides/prog_guide/cryptodev_lib.rst | 90
> +++++++++++++++++++++++++
> > doc/guides/rel_notes/release_20_11.rst | 7 ++
> > 2 files changed, 97 insertions(+)
>
> We generally do not take separate patches for documentation. Squash it the
> patches which
> Implement the feature.
>
> >
> > diff --git a/doc/guides/prog_guide/cryptodev_lib.rst
> > b/doc/guides/prog_guide/cryptodev_lib.rst
> > index c14f750fa..1321e4c5d 100644
> > --- a/doc/guides/prog_guide/cryptodev_lib.rst
> > +++ b/doc/guides/prog_guide/cryptodev_lib.rst
> > @@ -631,6 +631,96 @@ a call argument. Status different than zero must be
> > treated as error.
> > For more details, e.g. how to convert an mbuf to an SGL, please refer to an
> > example usage in the IPsec library implementation.
> >
> > +Cryptodev Direct Data-path Service API
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> What do you mean by Direct here. It should be referenced as raw APIs?
> Moreover, service keyword can also be dropped. We normally use it for
> Software implementation of a feature which is normally done in hardware.
>
You are right, raw API is a good name. Will remove service keyword.
>
> > +
> > +Direct crypto data-path service are a set of APIs that especially provided
> for
> > +the external libraries/applications who want to take advantage of the rich
> > +features provided by cryptodev, but not necessarily depend on cryptodev
> > +operations, mempools, or mbufs in the their data-path implementations.
>
> Raw crypto data path is a set of APIs which can be used by external
> libraries/applications to take advantage of the rich features provided by
> cryptodev,
> but not necessarily depend on cryptodev operations, mempools, or mbufs in
> the
> data-path implementations.
>
Thanks.
> > +
> > +The direct crypto data-path service has the following advantages:
> > +- Supports raw data pointer and physical addresses as input.
> > +- Do not require specific data structure allocated from heap, such as
> > + cryptodev operation.
> > +- Enqueue in a burst or single operation. The service allow enqueuing in
> > + a burst similar to ``rte_cryptodev_enqueue_burst`` operation, or only
> > + enqueue one job at a time but maintaining necessary context data locally
> for
> > + next single job enqueue operation. The latter method is especially
> helpful
> > + when the user application's crypto operations are clustered into a burst.
> > + Allowing enqueue one operation at a time helps reducing one additional
> loop
> > + and also reduced the cache misses during the double "looping" situation.
> > +- Customerizable dequeue count. Instead of dequeue maximum possible
> > operations
> > + as same as ``rte_cryptodev_dequeue_burst`` operation, the service
> allows the
> > + user to provide a callback function to decide how many operations to be
> > + dequeued. This is especially helpful when the expected dequeue count
> is
> > + hidden inside the opaque data stored during enqueue. The user can
> provide
> > + the callback function to parse the opaque data structure.
> > +- Abandon enqueue and dequeue anytime. One of the drawbacks of
> > + ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst``
> > + operations are: once an operation is enqueued/dequeued there is no
> way to
> > + undo the operation. The service make the operation abandon possible
> by
> > + creating a local copy of the queue operation data in the service context
> > + data. The data will be written back to driver maintained operation data
> > + when enqueue or dequeue done function is called.
> > +
>
> The language in the above text needs to be re-written. Some sentences do not
> make complete sense and have grammatical errors.
>
> I suggest to have an internal review within Intel first before sending the next
> version.
>
> > +The cryptodev data-path service uses
> > +
> > +Cryptodev PMDs who supports this feature will have
> > +``RTE_CRYPTODEV_FF_SYM_HW_DIRECT_API`` feature flag presented. To
> use
>
> RTE_CRYPTODEV_FF_SYM_HW_RAW_DP looks better.
>
> > this
> > +feature the function ``rte_cryptodev_get_dp_service_ctx_data_size``
> should
>
> Can be renamed as rte_cryptodev_get_raw_dp_ctx_size
>
> > +be called to get the data path service context data size. The user should
> > +creates a local buffer at least this size long and initialize it using
>
> The user should create a local buffer of at least this size and initialize it using
>
> > +``rte_cryptodev_dp_configure_service`` function call.
>
> rte_cryptodev_raw_dp_configure or rte_cryptodev_configure _raw_dp can
> be used here.
>
> > +
> > +The ``rte_cryptodev_dp_configure_service`` function call initializes or
> > +updates the ``struct rte_crypto_dp_service_ctx`` buffer, which contains the
> > +driver specific queue pair data pointer and service context buffer, and a
> > +set of function pointers to enqueue and dequeue different algorithms'
> > +operations. ``rte_cryptodev_dp_configure_service`` should be called when:
> > +
> > +- Before enqueuing or dequeuing starts (set the ``is_update`` parameter to 0).
> > +- When a different cryptodev session, security session, or session-less
> > +  xform is used (set the ``is_update`` parameter to 1).
>
> The use of is_update is not clear from the above text. IMO, we do not need
> this flag. Whenever an update is required, we change the session information
> and call the same API again and the driver can copy all information blindly
> without checking.
>
The flag is very important:
- When the flag is not set, queue-specific data will be written into the driver
specific data field in the service data. This data is written back to the
driver when the "submit_done" or "dequeue_done" function is called.
- When the flag is set, the above step is not executed; the driver only updates
the function handlers attached to the service data for the different algorithm
used. Will update this in the description.
> > +
> > +Two different enqueue functions are provided.
> > +
> > +- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored
> > +  in the ``rte_crypto_sym_vec`` structure.
> > +- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
>
> What is the meaning of single job here? Can we use multiple buffers/vectors
> of the same session in a single job? Or can we submit only a single
> buffer/vector in a job?
>
Same as CPU crypto, a sym vec contains one or more jobs with the same session.
But it is not an inline function, as it assumes we are bursting multiple ops.
Single job - the sole purpose is to use it as an inline function that pushes
one job into the queue without kicking the HW to start processing.
When the raw APIs are used in an external application/lib that also works in
bursts, such as VPP, single job submission is very useful to reduce the cycle
cost, since they also have to translate to their specific data structures and
also work in bursts. You don't want the application to loop over a burst of 32
ops to translate them into a burst of DPDK crypto sym vecs first, pass them to
the driver, and then have the driver loop over the jobs a second time to write
them to the HW one by one. Use of the inline "submit single" API helps reduce
the cycles and cache misses, especially when the burst size is 256.
> > +
> > +Neither enqueue function will command the crypto device to start processing
> > +until the ``rte_cryptodev_dp_submit_done`` function is called. Before then the
> > +user shall expect the driver only stores the necessary context data in the
> > +``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If the
> > +user wants to abandon the submitted operations, simply call the
> > +``rte_cryptodev_dp_configure_service`` function instead with the parameter
> > +``is_update`` set to 0. The driver will recover the service context data to
> > +the previous state.
>
> Can you explain a use case where this is actually being used? This looks
> fancy, but do we have this type of requirement in any protocol
> stacks/specifications?
> I believe it to be an extra burden on the application writer if it is not a
> protocol requirement.
It is not protocol stack specific.
Currently all crypto ops need to be translated into HW device/queue operation
data in a looped manner. The context between the last and current ops is stored
in the driver and is updated within the enqueue burst function (e.g. for QAT
such context is the shadow copy of the QAT queue tail/head).
As mentioned earlier, such a burst operation introduces more cycles and cache
misses. So we want to introduce single job enqueue in DPDK so we don't loop
over the same burst twice. However, we have to take care of the context between
the last and current submit single calls. The idea is to let the user allocate
the buffer for that. The "is_update" param is used to tell the driver if it
needs to write the context to the driver (is_update == 0), or only update
context-unrelated data inside the dp_service buffer (updating the function
handler for the algo, etc.).
When the burst is processed the user can call the submit_done function so the
user-maintained context is written back to the driver. So for the next burst
enqueue the user should call ``rte_cryptodev_dp_configure_service`` with
"is_update" = 0 before submitting any jobs to the driver, or, when the session
is changed, use "is_update" = 1 to only update the function pointer.
>
> > +
> > +To dequeue the operations the user also has two functions:
> > +
> > +- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation.
> > +  The user needs to provide the callback function for the driver to get the
> > +  dequeue count and perform post processing such as writing the status
> > +  field.
> > +- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue single job.
> > +
> > +Same as enqueue, the function ``rte_cryptodev_dp_dequeue_done`` is used to
> > +merge the user's local service context data with the driver's queue operation
> > +data. Also, to abandon the dequeue operation (still keeping the operations in
> > +the queue), the user shall avoid the ``rte_cryptodev_dp_dequeue_done``
> > +function call but call the ``rte_cryptodev_dp_configure_service`` function
> > +with the parameter ``is_update`` set to 0.
> > +
> > +There are a few limitations to the data path service:
> > +
> > +* Only support in-place operations.
> > +* APIs are NOT thread-safe.
> > +* CANNOT mix the direct API's enqueue with
> rte_cryptodev_enqueue_burst, or
> > + vice versa.
> > +
> > +See *DPDK API Reference* for details on each API definitions.
> > +
> > Sample code
> > -----------
> >
> > diff --git a/doc/guides/rel_notes/release_20_11.rst
> > b/doc/guides/rel_notes/release_20_11.rst
> > index df227a177..159823345 100644
> > --- a/doc/guides/rel_notes/release_20_11.rst
> > +++ b/doc/guides/rel_notes/release_20_11.rst
> > @@ -55,6 +55,13 @@ New Features
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > + * **Added data-path APIs for cryptodev library.**
> > +
> > +   Data-path APIs are added to cryptodev to accelerate external
> > +   libraries or applications that want fast cryptodev enqueue/dequeue
> > +   operations but do not necessarily depend on mbufs and the cryptodev
> > +   operation mempool.
> > +
> >
> > Removed Items
> > -------------
>
>
> Regards,
> Akhil
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct APIs
2020-09-18 20:03 ` Akhil Goyal
@ 2020-09-21 12:41 ` Zhang, Roy Fan
0 siblings, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-21 12:41 UTC (permalink / raw)
To: Akhil Goyal, dev; +Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX
Hi Akhil,
Comments inline.
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Friday, September 18, 2020 9:03 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>
> Subject: RE: [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct
> APIs
>
> Hi Fan,
>
> > Subject: [dpdk-dev v9 3/4] test/crypto: add unit-test for cryptodev direct
> APIs
> >
> > This patch adds the QAT test to use cryptodev symmetric crypto
> > direct APIs.
> >
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > ---
> > app/test/test_cryptodev.c | 461 +++++++++++++++++++++++---
> > app/test/test_cryptodev.h | 7 +
> > app/test/test_cryptodev_blockcipher.c | 51 ++-
> > 3 files changed, 456 insertions(+), 63 deletions(-)
> >
> > diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> > index 70bf6fe2c..13f642e0e 100644
> > --- a/app/test/test_cryptodev.c
> > +++ b/app/test/test_cryptodev.c
> > @@ -49,6 +49,8 @@
> > #define VDEV_ARGS_SIZE 100
> > #define MAX_NB_SESSIONS 4
> >
> > +#define MAX_DRV_SERVICE_CTX_SIZE 256
> > +
> > #define IN_PLACE 0
> > #define OUT_OF_PLACE 1
> >
> > @@ -57,6 +59,8 @@ static int gbl_driver_id;
> > static enum rte_security_session_action_type gbl_action_type =
> > RTE_SECURITY_ACTION_TYPE_NONE;
> >
> > +int cryptodev_dp_test;
> > +
>
> Why do we need this? We should make decision based on Feature flag of
> crypto device.
Even if the feature flag is supported we may want to test both the cryptodev_op
data path and the new data path. The parameter is set by the command line only.
>
>
> > struct crypto_testsuite_params {
> > struct rte_mempool *mbuf_pool;
> > struct rte_mempool *large_mbuf_pool;
> > @@ -147,6 +151,182 @@ ceil_byte_length(uint32_t num_bits)
> > return (num_bits >> 3);
> > }
> >
> > +void
> > +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct
> rte_crypto_op
> > *op,
> > + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
> > + uint8_t cipher_iv_len)
> > +{
> > + int32_t n;
> > + struct rte_crypto_sym_op *sop;
> > + struct rte_crypto_op *ret_op = NULL;
> > + struct rte_crypto_vec data_vec[UINT8_MAX];
> > + union rte_crypto_sym_additional_data a_data;
> > + union rte_crypto_sym_ofs ofs;
> > + int32_t status;
> > + uint32_t max_len;
> > + union rte_cryptodev_session_ctx sess;
> > + enum rte_crypto_dp_service service_type;
> > + uint32_t count = 0;
> > + uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0};
> > + struct rte_crypto_dp_service_ctx *ctx = (void *)service_data;
> > + uint32_t cipher_offset = 0, cipher_len = 0, auth_offset = 0,
> > + auth_len = 0;
> > + int ctx_service_size;
> > +
> > + sop = op->sym;
> > +
> > + sess.crypto_sess = sop->session;
> > +
> > + if (is_cipher && is_auth) {
> > + service_type = RTE_CRYPTO_DP_SYM_CHAIN;
> > + cipher_offset = sop->cipher.data.offset;
> > + cipher_len = sop->cipher.data.length;
> > + auth_offset = sop->auth.data.offset;
> > + auth_len = sop->auth.data.length;
> > + max_len = RTE_MAX(cipher_offset + cipher_len,
> > + auth_offset + auth_len);
> > + } else if (is_cipher) {
> > + service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY;
> > + cipher_offset = sop->cipher.data.offset;
> > + cipher_len = sop->cipher.data.length;
> > + max_len = cipher_len + cipher_offset;
> > + } else if (is_auth) {
> > + service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY;
> > + auth_offset = sop->auth.data.offset;
> > + auth_len = sop->auth.data.length;
> > + max_len = auth_len + auth_offset;
> > + } else { /* aead */
> > + service_type = RTE_CRYPTO_DP_SYM_AEAD;
> > + cipher_offset = sop->aead.data.offset;
> > + cipher_len = sop->aead.data.length;
> > + max_len = cipher_len + cipher_offset;
> > + }
> > +
> > + if (len_in_bits) {
> > + max_len = max_len >> 3;
> > + cipher_offset = cipher_offset >> 3;
> > + auth_offset = auth_offset >> 3;
> > + cipher_len = cipher_len >> 3;
> > + auth_len = auth_len >> 3;
> > + }
> > +
> > + ctx_service_size =
> rte_cryptodev_dp_get_service_ctx_data_size(dev_id);
> > + assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE &&
> > + ctx_service_size > 0);
> > +
> > + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
> > + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) {
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
>
> **_dp_configure_service does not provide context of the API. What does
> service mean here?
> Can we rename it to rte_cryptodev_configure_raw_dp?
> This gives better readability to have an API to configure crypto device for raw
> data path.
Great name. Thanks. Will change.
>
> > +
> > + /* test update service */
> > + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type,
> > + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) {
>
> Do we really need an extra parameter to specify update? Can we not call
> again same API
> for update? Anyway the implementation will be copying the complete
> information from
> the sess.
Explained in the last email replying to your comments on the 4th patch.
>
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
> > +
> > + n = rte_crypto_mbuf_to_vec(sop->m_src, 0, max_len,
> > + data_vec, RTE_DIM(data_vec));
> > + if (n < 0 || n > sop->m_src->nb_segs) {
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
> > +
> > + ofs.raw = 0;
> > +
> > + switch (service_type) {
> > + case RTE_CRYPTO_DP_SYM_AEAD:
> > + ofs.ofs.cipher.head = cipher_offset;
> > + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
> > + a_data.aead.iv_ptr = rte_crypto_op_ctod_offset(op, void *,
> > + IV_OFFSET);
> > + a_data.aead.iv_iova = rte_crypto_op_ctophys_offset(op,
> > + IV_OFFSET);
> > + a_data.aead.aad_ptr = (void *)sop->aead.aad.data;
> > + a_data.aead.aad_iova = sop->aead.aad.phys_addr;
> > + a_data.aead.digest_ptr = (void *)sop->aead.digest.data;
> > + a_data.aead.digest_iova = sop->aead.digest.phys_addr;
> > + break;
> > + case RTE_CRYPTO_DP_SYM_CIPHER_ONLY:
> > + ofs.ofs.cipher.head = cipher_offset;
> > + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
> > + a_data.cipher_auth.cipher_iv_ptr =
> rte_crypto_op_ctod_offset(
> > + op, void *, IV_OFFSET);
> > + a_data.cipher_auth.cipher_iv_iova =
> > + rte_crypto_op_ctophys_offset(op,
> IV_OFFSET);
> > + break;
> > + case RTE_CRYPTO_DP_SYM_AUTH_ONLY:
> > + ofs.ofs.auth.head = auth_offset;
> > + ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
> > + a_data.cipher_auth.auth_iv_ptr =
> rte_crypto_op_ctod_offset(
> > + op, void *, IV_OFFSET + cipher_iv_len);
> > + a_data.cipher_auth.auth_iv_iova =
> > + rte_crypto_op_ctophys_offset(op,
> IV_OFFSET +
> > + cipher_iv_len);
> > + a_data.cipher_auth.digest_ptr = (void *)sop-
> >auth.digest.data;
> > + a_data.cipher_auth.digest_iova = sop-
> >auth.digest.phys_addr;
> > + break;
> > + case RTE_CRYPTO_DP_SYM_CHAIN:
> > + ofs.ofs.cipher.head = cipher_offset;
> > + ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
> > + ofs.ofs.auth.head = auth_offset;
> > + ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
> > + a_data.cipher_auth.cipher_iv_ptr =
> rte_crypto_op_ctod_offset(
> > + op, void *, IV_OFFSET);
> > + a_data.cipher_auth.cipher_iv_iova =
> > + rte_crypto_op_ctophys_offset(op,
> IV_OFFSET);
> > + a_data.cipher_auth.auth_iv_ptr =
> rte_crypto_op_ctod_offset(
> > + op, void *, IV_OFFSET + cipher_iv_len);
> > + a_data.cipher_auth.auth_iv_iova =
> > + rte_crypto_op_ctophys_offset(op,
> IV_OFFSET +
> > + cipher_iv_len);
> > + a_data.cipher_auth.digest_ptr = (void *)sop-
> >auth.digest.data;
> > + a_data.cipher_auth.digest_iova = sop-
> >auth.digest.phys_addr;
>
> Instead of cipher_auth, it should be chain. Can we also support "hash then
> cipher" case also?
Cipher_auth is a name describing the fields needed for cipher-only, auth-only,
and chain operations. "cipher" appears before "auth" merely because of
alphabetical order. Of course hash then cipher is supported - as long as a
session can be created successfully by a PMD, the API supports it (but not OOP).
>
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + status = rte_cryptodev_dp_sym_submit_single_job(ctx, data_vec, n,
> ofs,
> > + &a_data, (void *)op);
> > + if (status < 0) {
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
> > +
> > + status = rte_cryptodev_dp_sym_submit_done(ctx, 1);
> > + if (status < 0) {
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
> > +
> > +
> > + status = -1;
> > + while (count++ < 65535 && status == -1) {
> > + status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
> > + (void **)&ret_op);
> > + if (status == -1)
> > + rte_pause();
> > + }
> > +
> > + if (status != -1) {
> > + if (rte_cryptodev_dp_sym_dequeue_done(ctx, 1) < 0) {
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
> > + }
> > +
> > + if (count == 65536 || status != 1 || ret_op != op) {
>
> Remove hardcode 65536
Will do.
>
> > + op->status = RTE_CRYPTO_OP_STATUS_ERROR;
> > + return;
> > + }
> > +
> > + op->status = status == 1 ? RTE_CRYPTO_OP_STATUS_SUCCESS :
> > + RTE_CRYPTO_OP_STATUS_ERROR;
> > +}
> > +
> > static void
> > process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
> > {
> > @@ -1656,6 +1836,9 @@
> > test_AES_CBC_HMAC_SHA512_decrypt_perform(struct
> > rte_cryptodev_sym_session *sess,
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 0, 0);
>
> Can we rename process_sym_hw_api_op to process_sym_hw_raw_op?
Yes.
>
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params-
> > >valid_devs[0],
> > @@ -1710,12 +1893,18 @@ test_AES_cipheronly_all(void)
> > static int
> > test_AES_docsis_all(void)
> > {
> > + /* Data-path service does not support DOCSIS yet */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > return test_blockcipher(BLKCIPHER_AES_DOCSIS_TYPE);
> > }
> >
> > static int
> > test_DES_docsis_all(void)
> > {
> > + /* Data-path service does not support DOCSIS yet */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > return test_blockcipher(BLKCIPHER_DES_DOCSIS_TYPE);
> > }
> >
> > @@ -2470,7 +2659,11 @@ test_snow3g_authentication(const struct
> > snow3g_hash_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 1, 0);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > ut_params->obuf = ut_params->op->sym->m_src;
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -2549,7 +2742,11 @@ test_snow3g_authentication_verify(const struct
> > snow3g_hash_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 1, 0);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > ut_params->obuf = ut_params->op->sym->m_src;
> > @@ -2619,6 +2816,9 @@ test_kasumi_authentication(const struct
> > kasumi_hash_test_data *tdata)
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 1, 0);
> > else
> > ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > @@ -2690,7 +2890,11 @@ test_kasumi_authentication_verify(const struct
> > kasumi_hash_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 1, 0);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > ut_params->obuf = ut_params->op->sym->m_src;
> > @@ -2897,8 +3101,12 @@ test_kasumi_encryption(const struct
> > kasumi_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > - ut_params->op);
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > + ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > ut_params->obuf = ut_params->op->sym->m_dst;
> > @@ -2983,7 +3191,11 @@ test_kasumi_encryption_sgl(const struct
> > kasumi_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -3026,8 +3238,9 @@ test_kasumi_encryption_oop(const struct
> > kasumi_test_data *tdata)
> > struct rte_cryptodev_sym_capability_idx cap_idx;
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
> > + /* Data-path service does not support OOP */
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > /* Create KASUMI session */
> > @@ -3107,8 +3320,9 @@ test_kasumi_encryption_oop_sgl(const struct
> > kasumi_test_data *tdata)
> > struct rte_cryptodev_sym_capability_idx cap_idx;
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
> > + /* Data-path service does not support OOP */
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> > @@ -3192,8 +3406,9 @@ test_kasumi_decryption_oop(const struct
> > kasumi_test_data *tdata)
> > struct rte_cryptodev_sym_capability_idx cap_idx;
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
> > + /* Data-path service does not support OOP */
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > /* Create KASUMI session */
> > @@ -3306,7 +3521,11 @@ test_kasumi_decryption(const struct
> > kasumi_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, 0);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -3381,7 +3600,11 @@ test_snow3g_encryption(const struct
> > snow3g_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -3419,7 +3642,7 @@ test_snow3g_encryption_oop(const struct
> > snow3g_test_data *tdata)
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > /* Create SNOW 3G session */
> > @@ -3502,7 +3725,7 @@ test_snow3g_encryption_oop_sgl(const struct
> > snow3g_test_data *tdata)
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> > @@ -3621,7 +3844,7 @@ test_snow3g_encryption_offset_oop(const
> struct
> > snow3g_test_data *tdata)
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > /* Create SNOW 3G session */
> > @@ -3756,7 +3979,11 @@ static int test_snow3g_decryption(const struct
> > snow3g_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > ut_params->obuf = ut_params->op->sym->m_dst;
> > @@ -3791,7 +4018,7 @@ static int test_snow3g_decryption_oop(const
> struct
> > snow3g_test_data *tdata)
> > cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> > cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_SNOW3G_UEA2;
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > - &cap_idx) == NULL)
> > + &cap_idx) == NULL || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > /* Create SNOW 3G session */
> > @@ -3924,7 +4151,11 @@ test_zuc_cipher_auth(const struct
> > wireless_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > ut_params->obuf = ut_params->op->sym->m_src;
> > @@ -4019,7 +4250,11 @@ test_snow3g_cipher_auth(const struct
> > snow3g_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > ut_params->obuf = ut_params->op->sym->m_src;
> > @@ -4087,6 +4322,8 @@ test_snow3g_auth_cipher(const struct
> > snow3g_test_data *tdata,
> > printf("Device doesn't support digest encrypted.\n");
> > return -ENOTSUP;
> > }
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > }
> >
> > /* Create SNOW 3G session */
> > @@ -4155,7 +4392,11 @@ test_snow3g_auth_cipher(const struct
> > snow3g_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -4266,6 +4507,8 @@ test_snow3g_auth_cipher_sgl(const struct
> > snow3g_test_data *tdata,
> > return -ENOTSUP;
> > }
> > } else {
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > if (!(feat_flags &
> RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> > {
> > printf("Device doesn't support out-of-place scatter-
> > gather "
> > "in both input and output mbufs.\n");
> > @@ -4344,7 +4587,11 @@ test_snow3g_auth_cipher_sgl(const struct
> > snow3g_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -4453,6 +4700,8 @@ test_kasumi_auth_cipher(const struct
> > kasumi_test_data *tdata,
> > uint64_t feat_flags = dev_info.feature_flags;
> >
> > if (op_mode == OUT_OF_PLACE) {
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > if (!(feat_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED))
> {
> > printf("Device doesn't support digest encrypted.\n");
> > return -ENOTSUP;
> > @@ -4526,7 +4775,11 @@ test_kasumi_auth_cipher(const struct
> > kasumi_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -4638,6 +4891,8 @@ test_kasumi_auth_cipher_sgl(const struct
> > kasumi_test_data *tdata,
> > return -ENOTSUP;
> > }
> > } else {
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > if (!(feat_flags &
> RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> > {
> > printf("Device doesn't support out-of-place scatter-
> > gather "
> > "in both input and output mbufs.\n");
> > @@ -4716,7 +4971,11 @@ test_kasumi_auth_cipher_sgl(const struct
> > kasumi_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -4857,7 +5116,11 @@ test_kasumi_cipher_auth(const struct
> > kasumi_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -4944,7 +5207,11 @@ test_zuc_encryption(const struct
> wireless_test_data
> > *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -5031,7 +5298,11 @@ test_zuc_encryption_sgl(const struct
> > wireless_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -5119,7 +5390,11 @@ test_zuc_authentication(const struct
> > wireless_test_data *tdata)
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 1, 0);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > ut_params->obuf = ut_params->op->sym->m_src;
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -5177,6 +5452,8 @@ test_zuc_auth_cipher(const struct
> wireless_test_data
> > *tdata,
> > return -ENOTSUP;
> > }
> > } else {
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > if (!(feat_flags &
> RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> > {
> > printf("Device doesn't support out-of-place scatter-
> > gather "
> > "in both input and output mbufs.\n");
> > @@ -5251,7 +5528,11 @@ test_zuc_auth_cipher(const struct
> > wireless_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -5359,6 +5640,8 @@ test_zuc_auth_cipher_sgl(const struct
> > wireless_test_data *tdata,
> > return -ENOTSUP;
> > }
> > } else {
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > if (!(feat_flags &
> RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
> > {
> > printf("Device doesn't support out-of-place scatter-
> > gather "
> > "in both input and output mbufs.\n");
> > @@ -5437,7 +5720,11 @@ test_zuc_auth_cipher_sgl(const struct
> > wireless_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
> > + else
> > + ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> > @@ -5580,6 +5867,9 @@ test_kasumi_decryption_test_case_2(void)
> > static int
> > test_kasumi_decryption_test_case_3(void)
> > {
> > + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build
> */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > return test_kasumi_decryption(&kasumi_test_case_3);
> > }
> >
> > @@ -5779,6 +6069,9 @@
> test_snow3g_auth_cipher_part_digest_enc_oop(void)
> > static int
> > test_snow3g_auth_cipher_test_case_3_sgl(void)
> > {
> > + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build
> */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > return test_snow3g_auth_cipher_sgl(
> > &snow3g_auth_cipher_test_case_3, IN_PLACE, 0);
> > }
> > @@ -5793,6 +6086,9 @@
> test_snow3g_auth_cipher_test_case_3_oop_sgl(void)
> > static int
> > test_snow3g_auth_cipher_part_digest_enc_sgl(void)
> > {
> > + /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build
> */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > return test_snow3g_auth_cipher_sgl(
> > &snow3g_auth_cipher_partial_digest_encryption,
> > IN_PLACE, 0);
> > @@ -6146,10 +6442,9 @@ test_mixed_auth_cipher(const struct
> > mixed_cipher_auth_test_data *tdata,
> > unsigned int ciphertext_len;
> >
> > struct rte_cryptodev_info dev_info;
> > - struct rte_crypto_op *op;
> >
> > /* Check if device supports particular algorithms separately */
> > - if (test_mixed_check_if_unsupported(tdata))
> > + if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> > @@ -6161,6 +6456,9 @@ test_mixed_auth_cipher(const struct
> > mixed_cipher_auth_test_data *tdata,
> > return -ENOTSUP;
> > }
> >
> > + if (op_mode == OUT_OF_PLACE)
> > + return -ENOTSUP;
> > +
> > /* Create the session */
> > if (verify)
> > retval = create_wireless_algo_cipher_auth_session(
> > @@ -6192,9 +6490,11 @@ test_mixed_auth_cipher(const struct
> > mixed_cipher_auth_test_data *tdata,
> > /* clear mbuf payload */
> > memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
> > rte_pktmbuf_tailroom(ut_params->ibuf));
> > - if (op_mode == OUT_OF_PLACE)
> > + if (op_mode == OUT_OF_PLACE) {
> > +
> > memset(rte_pktmbuf_mtod(ut_params->obuf, uint8_t *), 0,
> > rte_pktmbuf_tailroom(ut_params->obuf));
> > + }
>
> Unnecessary change.
>
> >
> > ciphertext_len = ceil_byte_length(tdata->ciphertext.len_bits);
> > plaintext_len = ceil_byte_length(tdata->plaintext.len_bits);
> > @@ -6235,18 +6535,17 @@ test_mixed_auth_cipher(const struct
> > mixed_cipher_auth_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - op = process_crypto_request(ts_params->valid_devs[0],
> > + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > ut_params->op);
> >
> > /* Check if the op failed because the device doesn't */
> > /* support this particular combination of algorithms */
> > - if (op == NULL && ut_params->op->status ==
> > + if (ut_params->op == NULL && ut_params->op->status ==
> > RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
> > printf("Device doesn't support this mixed combination. "
> > "Test Skipped.\n");
> > return -ENOTSUP;
> > }
> > - ut_params->op = op;
>
> Unnecessary change
>
>
>
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > @@ -6337,10 +6636,9 @@ test_mixed_auth_cipher_sgl(const struct
> > mixed_cipher_auth_test_data *tdata,
> > uint8_t digest_buffer[10000];
> >
> > struct rte_cryptodev_info dev_info;
> > - struct rte_crypto_op *op;
> >
> > /* Check if device supports particular algorithms */
> > - if (test_mixed_check_if_unsupported(tdata))
> > + if (test_mixed_check_if_unsupported(tdata) || cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> > @@ -6440,20 +6738,18 @@ test_mixed_auth_cipher_sgl(const struct
> > mixed_cipher_auth_test_data *tdata,
> > if (retval < 0)
> > return retval;
> >
> > - op = process_crypto_request(ts_params->valid_devs[0],
> > + ut_params->op = process_crypto_request(ts_params-
> >valid_devs[0],
> > ut_params->op);
> >
> > /* Check if the op failed because the device doesn't */
> > /* support this particular combination of algorithms */
> > - if (op == NULL && ut_params->op->status ==
> > + if (ut_params->op == NULL && ut_params->op->status ==
> > RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
> > printf("Device doesn't support this mixed combination. "
> > "Test Skipped.\n");
> > return -ENOTSUP;
> > }
>
> Unnecessary change
>
> >
> > - ut_params->op = op;
> > -
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
> >
> > ut_params->obuf = (op_mode == IN_PLACE ?
> > @@ -7043,6 +7339,9 @@ test_authenticated_encryption(const struct
> > aead_test_data *tdata)
> > /* Process crypto operation */
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_aead_op(ts_params->valid_devs[0],
> ut_params-
> > >op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 0, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -8540,6 +8839,9 @@ test_authenticated_decryption(const struct
> > aead_test_data *tdata)
> > /* Process crypto operation */
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_aead_op(ts_params->valid_devs[0],
> ut_params-
> > >op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 0, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -8833,6 +9135,9 @@ test_authenticated_encryption_oop(const struct
> > aead_test_data *tdata)
> > if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
> > &cap_idx) == NULL)
> > return -ENOTSUP;
> > + /* Data-path service does not support OOP */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> >
> > /* not supported with CPU crypto */
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > @@ -8923,8 +9228,9 @@ test_authenticated_decryption_oop(const struct
> > aead_test_data *tdata)
> > &cap_idx) == NULL)
> > return -ENOTSUP;
> >
> > - /* not supported with CPU crypto */
> > - if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > + /* not supported with CPU crypto and data-path service*/
> > + if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
> ||
> > + cryptodev_dp_test)
> > return -ENOTSUP;
> >
> > /* Create AEAD session */
> > @@ -9151,8 +9457,13 @@ test_authenticated_decryption_sessionless(
> > "crypto op session type not sessionless");
> >
> > /* Process crypto operation */
> > - TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params-
> > >valid_devs[0],
> > - ut_params->op), "failed to process sym crypto op");
> > + if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 0, 0, 0);
> > + else
> > + TEST_ASSERT_NOT_NULL(process_crypto_request(
> > + ts_params->valid_devs[0], ut_params->op),
> > + "failed to process sym crypto op");
> >
> > TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
> >
> > @@ -9472,6 +9783,9 @@ test_MD5_HMAC_generate(const struct
> > HMAC_MD5_vector *test_case)
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -9530,6 +9844,9 @@ test_MD5_HMAC_verify(const struct
> > HMAC_MD5_vector *test_case)
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -10098,6 +10415,9 @@ test_AES_GMAC_authentication(const struct
> > gmac_test_data *tdata)
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -10215,6 +10535,9 @@ test_AES_GMAC_authentication_verify(const
> > struct gmac_test_data *tdata)
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -10780,7 +11103,10 @@
> > test_authentication_verify_fail_when_data_corruption(
> > TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
> > RTE_CRYPTO_OP_STATUS_SUCCESS,
> > "authentication not failed");
> > - } else {
> > + } else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 0, 0);
> > + else {
> > ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NULL(ut_params->op, "authentication not
> > failed");
> > @@ -10851,7 +11177,10 @@
> > test_authentication_verify_GMAC_fail_when_corruption(
> > TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
> > RTE_CRYPTO_OP_STATUS_SUCCESS,
> > "authentication not failed");
> > - } else {
> > + } else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 1, 0, 0);
> > + else {
> > ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NULL(ut_params->op, "authentication not
> > failed");
> > @@ -10926,7 +11255,10 @@
> > test_authenticated_decryption_fail_when_corruption(
> > TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
> > RTE_CRYPTO_OP_STATUS_SUCCESS,
> > "authentication not failed");
> > - } else {
> > + } else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 0, 0);
> > + else {
> > ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > TEST_ASSERT_NULL(ut_params->op, "authentication not
> > failed");
> > @@ -11021,6 +11353,9 @@ test_authenticated_encryt_with_esn(
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 0, 0);
> > else
> > ut_params->op = process_crypto_request(
> > ts_params->valid_devs[0], ut_params->op);
> > @@ -11141,6 +11476,9 @@ test_authenticated_decrypt_with_esn(
> > if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_crypt_auth_op(ts_params->valid_devs[0],
> > ut_params->op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 1, 1, 0, 0);
> > else
> > ut_params->op = process_crypto_request(ts_params-
> > >valid_devs[0],
> > ut_params->op);
> > @@ -11285,6 +11623,9 @@ test_authenticated_encryption_SGL(const
> struct
> > aead_test_data *tdata,
> > unsigned int sgl_in = fragsz < tdata->plaintext.len;
> > unsigned int sgl_out = (fragsz_oop ? fragsz_oop : fragsz) <
> > tdata->plaintext.len;
> > + /* Data path service does not support OOP */
> > + if (cryptodev_dp_test)
> > + return -ENOTSUP;
> > if (sgl_in && !sgl_out) {
> > if (!(dev_info.feature_flags &
> >
> > RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))
> > @@ -11480,6 +11821,9 @@ test_authenticated_encryption_SGL(const
> struct
> > aead_test_data *tdata,
> > if (oop == IN_PLACE &&
> > gbl_action_type ==
> > RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
> > process_cpu_aead_op(ts_params->valid_devs[0],
> ut_params-
> > >op);
> > + else if (cryptodev_dp_test)
> > + process_sym_hw_api_op(ts_params->valid_devs[0], 0,
> > + ut_params->op, 0, 0, 0, 0);
> > else
> > TEST_ASSERT_NOT_NULL(
> > process_crypto_request(ts_params->valid_devs[0],
> > @@ -13041,6 +13385,29 @@ test_cryptodev_nitrox(void)
> > return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
> > }
> >
> > +static int
> > +test_qat_sym_direct_api(void /*argv __rte_unused, int argc
> __rte_unused*/)
> > +{
> > + int ret;
> > +
> > + gbl_driver_id = rte_cryptodev_driver_id_get(
> > + RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
> > +
> > + if (gbl_driver_id == -1) {
> > + RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that
> > both "
> > + "CONFIG_RTE_LIBRTE_PMD_QAT and
> > CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
> > + "are enabled in config file to run this testsuite.\n");
> > + return TEST_SKIPPED;
> > + }
> > +
> > + cryptodev_dp_test = 1;
>
> Cryptodev_dp_test cannot be set blindly. You should check for feature flag
> of the device
> If the feature is supported or not.
>
> Can we also rename this flag as "test_raw_crypto_dp"
>
> > + ret = unit_test_suite_runner(&cryptodev_testsuite);
> > + cryptodev_dp_test = 0;
> > +
> > + return ret;
> > +}
> > +
> > +REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest,
> > test_qat_sym_direct_api);
>
> It would be better to name the test string as test_cryptodev_qat_raw_dp
>
> > REGISTER_TEST_COMMAND(cryptodev_qat_autotest,
> test_cryptodev_qat);
> > REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest,
> > test_cryptodev_aesni_mb);
> > REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
> > diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
> > index 41542e055..e4e4c7626 100644
> > --- a/app/test/test_cryptodev.h
> > +++ b/app/test/test_cryptodev.h
> > @@ -71,6 +71,8 @@
> > #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
> > #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
> >
> > +extern int cryptodev_dp_test;
> > +
> > /**
> > * Write (spread) data from buffer to mbuf data
> > *
> > @@ -209,4 +211,9 @@ create_segmented_mbuf(struct rte_mempool
> > *mbuf_pool, int pkt_len,
> > return NULL;
> > }
> >
> > +void
> > +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct
> rte_crypto_op
> > *op,
> > + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits,
> > + uint8_t cipher_iv_len);
> > +
> > #endif /* TEST_CRYPTODEV_H_ */
> > diff --git a/app/test/test_cryptodev_blockcipher.c
> > b/app/test/test_cryptodev_blockcipher.c
> > index 221262341..311b34c15 100644
> > --- a/app/test/test_cryptodev_blockcipher.c
> > +++ b/app/test/test_cryptodev_blockcipher.c
> > @@ -462,25 +462,44 @@ test_blockcipher_one_case(const struct
> > blockcipher_test_case *t,
> > }
> >
> > /* Process crypto operation */
> > - if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
> > - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> > - "line %u FAILED: %s",
> > - __LINE__, "Error sending packet for encryption");
> > - status = TEST_FAILED;
> > - goto error_exit;
> > - }
> > + if (cryptodev_dp_test) {
> > + uint8_t is_cipher = 0, is_auth = 0;
> >
> > - op = NULL;
> > + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
> > + RTE_LOG(DEBUG, USER1,
> > + "QAT direct API does not support OOP, Test
> > Skipped.\n");
> > + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> > "SKIPPED");
> > + status = TEST_SUCCESS;
> > + goto error_exit;
> > + }
> > + if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
> > + is_cipher = 1;
> > + if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
> > + is_auth = 1;
> >
> > - while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
> > - rte_pause();
> > + process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0,
> > + tdata->iv.len);
> > + } else {
> > + if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
> > + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> > + "line %u FAILED: %s",
> > + __LINE__, "Error sending packet for
> > encryption");
> > + status = TEST_FAILED;
> > + goto error_exit;
> > + }
> >
> > - if (!op) {
> > - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> > - "line %u FAILED: %s",
> > - __LINE__, "Failed to process sym crypto op");
> > - status = TEST_FAILED;
> > - goto error_exit;
> > + op = NULL;
> > +
> > + while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) ==
> 0)
> > + rte_pause();
> > +
> > + if (!op) {
> > + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
> > + "line %u FAILED: %s",
> > + __LINE__, "Failed to process sym crypto op");
> > + status = TEST_FAILED;
> > + goto error_exit;
> > + }
> > }
> >
> > debug_hexdump(stdout, "m_src(after):",
> > --
> > 2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-21 11:59 ` Akhil Goyal
@ 2020-09-21 15:26 ` Zhang, Roy Fan
2020-09-21 15:41 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-21 15:26 UTC (permalink / raw)
To: Akhil Goyal, dev, Ananyev, Konstantin, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Monday, September 21, 2020 1:00 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
> Hi Fan,
>
> > > >
> > > > +/** Crypto data-path service types */
> > > > +enum rte_crypto_dp_service {
> > > > + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0,
> > > > + RTE_CRYPTO_DP_SYM_AUTH_ONLY,
> > > > + RTE_CRYPTO_DP_SYM_CHAIN,
> > > > + RTE_CRYPTO_DP_SYM_AEAD,
> > > > + RTE_CRYPTO_DP_N_SERVICE
> > > > +};
> > >
> > > Comments missing for this enum.
> > > Do we really need this enum?
> > > Can we not have this info in the driver from the xform list?
> > > And if we really want to add this, why to have it specific to raw data path
> APIs?
> > >
> > Will add comments to this enum.
> > Unless the driver will store xform data in certain way (in fact QAT has it) the
> > driver may not know which data-path to choose from.
> > The purpose of having this enum is that the driver knows to attach the
> correct
> > handler into the service data structure fast.
> >
> I believe all drivers are storing that information already in some way in the
> session private data.
> This enum is maintained inside driver as of current implementation. This is
> not specific to raw
> Data path APIs. If you are introducing this enum in library, then it should be
> generic for the legacy
> Case as well.
>
[Fan: I am not sure about other drivers (that's part of the reason why the enum is here), but indeed QAT does store it. Ok]
>
> > >
> > > > +union rte_crypto_sym_additional_data {
> > > > + struct {
> > > > + void *cipher_iv_ptr;
> > > > + rte_iova_t cipher_iv_iova;
> > > > + void *auth_iv_ptr;
> > > > + rte_iova_t auth_iv_iova;
> > > > + void *digest_ptr;
> > > > + rte_iova_t digest_iova;
> > > > + } cipher_auth;
> > >
> > > Should be chain instead of cipher_auth
> > This field is used for cipher only, auth only, or chain use-cases so I believe
> this is
> > a better name for it.
>
> Agreed that this struct will be used for all 3 cases, that is what is happening in
> Other crypto cases. We use chain for all these three cases in legacy codepath.
> Chain can be of one or two xforms and ordering can be anything -
> Cipher only, auth only, cipher auth and auth cipher.
>
>
> > >
> > > > + struct {
> > > > + void *iv_ptr;
> > > > + rte_iova_t iv_iova;
> > > > + void *digest_ptr;
> > > > + rte_iova_t digest_iova;
> > > > + void *aad_ptr;
> > > > + rte_iova_t aad_iova;
> > > > + } aead;
> > > > +};
> > > > +
> > > > /**
> > > > * Synchronous operation descriptor.
> > > > * Supposed to be used with CPU crypto API call.
> > > > @@ -57,12 +81,25 @@ struct rte_crypto_sgl {
> > > > struct rte_crypto_sym_vec {
> > > > /** array of SGL vectors */
> > > > struct rte_crypto_sgl *sgl;
> > > > - /** array of pointers to IV */
> > > > - void **iv;
> > > > - /** array of pointers to AAD */
> > > > - void **aad;
> > > > - /** array of pointers to digest */
> > > > - void **digest;
> > > > +
> > > > + union {
> > > > +
> > > > + /* Supposed to be used with CPU crypto API call. */
> > > > + struct {
> > > > + /** array of pointers to IV */
> > > > + void **iv;
> > > > + /** array of pointers to AAD */
> > > > + void **aad;
> > > > + /** array of pointers to digest */
> > > > + void **digest;
> > > > + };
> > >
> > > Can we also name this struct?
> > > Probably we should split this as a separate patch.
> > [Then this is an API break right?]
>
> Since this an LTS release, I am ok to take this change.
> But others can comment on this.
> @Ananyev, Konstantin, @Thomas Monjalon
> Can you comment on this?
>
> > >
> > > > +
> > > > + /* Supposed to be used with
> > > > rte_cryptodev_dp_sym_submit_vec()
> > > > + * call.
> > > > + */
> > > > + union rte_crypto_sym_additional_data *additional_data;
> > > > + };
> > > > +
> > >
> > > Can we get rid of this unnecessary union
> rte_crypto_sym_additional_data
> > > And place chain and aead directly in the union? At any point, only one of
> the
> > > three
> > > would be used.
> > We have 2 main different uses cases, 1 for cpu crypto and 1 for data path
> APIs.
> > Within each main uses case there are 4 types of algo (cipher only/auth
> > only/aead/chain), one requiring HW address and virtual address, the other
> > doesn't.
> > It seems to causing too much confusion to include these many union into
> the
> > structure that initially was designed for cpu crypto only.
> > I suggest better to use different structure than squeeze all into a big union.
> >
>
> IMO, the following union can clarify all doubts.
> @Ananyev, Konstantin: Any suggestions from your side?
>
> /** IV and aad information for various use cases. */
> union {
> /** Supposed to be used with CPU crypto API call. */
> struct {
> /** array of pointers to IV */
> void **iv;
> /** array of pointers to AAD */
> void **aad;
> /** array of pointers to digest */
> void **digest;
> } cpu_crypto; < or any other useful name>
> /* Supposed to be used with HW raw crypto API call. */
> struct {
> void *cipher_iv_ptr;
> rte_iova_t cipher_iv_iova;
> void *auth_iv_ptr;
> rte_iova_t auth_iv_iova;
> void *digest_ptr;
> rte_iova_t digest_iova;
> } hw_chain;
> /* Supposed to be used with HW raw crypto API call. */
> struct {
> void *iv_ptr;
> rte_iova_t iv_iova;
> void *digest_ptr;
> rte_iova_t digest_iova;
> void *aad_ptr;
> rte_iova_t aad_iova;
> } hw_aead;
> };
>
>
[Structure looks good to me thanks!]
>
> > > > +/**
> > > > + * Context data for asynchronous crypto process.
> > > > + */
> > > > +struct rte_crypto_dp_service_ctx {
> > > > + void *qp_data;
> > > > +
> > > > + struct {
> > > > + cryptodev_dp_submit_single_job_t submit_single_job;
> > > > + cryptodev_dp_sym_submit_vec_t submit_vec;
> > > > + cryptodev_dp_sym_operation_done_t submit_done;
> > > > + cryptodev_dp_sym_dequeue_t dequeue_opaque;
> > > > + cryptodev_dp_sym_dequeue_single_job_t dequeue_single;
> > > > + cryptodev_dp_sym_operation_done_t dequeue_done;
> > > > + };
> > > > +
> > > > + /* Driver specific service data */
> > > > + __extension__ uint8_t drv_service_data[];
> > > > +};
> > >
> > > Comments missing for structure params.
> > > Struct name can be rte_crypto_raw_dp_ctx.
> > >
> > > Who allocate and free this structure?
> > Same as crypto session, the user need to query the driver specific service
> data
> > Size and allocate the buffer accordingly. The difference is it does not have
> to
> > Be from mempool as it can be reused.
>
> So this structure is saved and filled by the lib/driver and not the application.
> Right?
> This struct is opaque to application and will be part of session private data.
> Right?
> Assignment and calling appropriate driver's call backs will be hidden inside
> library
> and will be opaque to the application. In other words, the structure is not
> exposed
> to the application.
> Please add relevant comments on top of this structure.
>
[Fan: will do]
>
> > > > +static __rte_always_inline int
> > > > +_cryptodev_dp_sym_dequeue_single_job(struct
> > > rte_crypto_dp_service_ctx
> > > > *ctx,
> > > > + void **out_opaque)
> > > > +{
> > > > + return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data,
> > > > + out_opaque);
> > > > +}
> > > > +
> > > > +/**
> > > > + * Submit single job into device queue but the driver will not start
> > > > + * processing until rte_cryptodev_dp_submit_done() is called. This is a
> > > > + * simplified
>
> Comment not complete.
>
> Regards,
> Akhil
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-21 11:59 ` Akhil Goyal
2020-09-21 15:26 ` Zhang, Roy Fan
@ 2020-09-21 15:41 ` Zhang, Roy Fan
2020-09-21 15:49 ` Akhil Goyal
1 sibling, 1 reply; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-21 15:41 UTC (permalink / raw)
To: Akhil Goyal, dev, Ananyev, Konstantin, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Monday, September 21, 2020 1:00 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
...
> IMO, the following union can clarify all doubts.
> @Ananyev, Konstantin: Any suggestions from your side?
>
> /** IV and aad information for various use cases. */
> union {
> /** Supposed to be used with CPU crypto API call. */
> struct {
> /** array of pointers to IV */
> void **iv;
> /** array of pointers to AAD */
> void **aad;
> /** array of pointers to digest */
> void **digest;
> } cpu_crypto; < or any other useful name>
> /* Supposed to be used with HW raw crypto API call. */
> struct {
> void *cipher_iv_ptr;
> rte_iova_t cipher_iv_iova;
> void *auth_iv_ptr;
> rte_iova_t auth_iv_iova;
> void *digest_ptr;
> rte_iova_t digest_iova;
> } hw_chain;
> /* Supposed to be used with HW raw crypto API call. */
> struct {
> void *iv_ptr;
> rte_iova_t iv_iova;
> void *digest_ptr;
> rte_iova_t digest_iova;
> void *aad_ptr;
> rte_iova_t aad_iova;
> } hw_aead;
> };
>
>
The above structure cannot support an array of multiple jobs, only a single job.
So we have to use something like
struct {
void **cipher_iv_ptr;
rte_iova_t *cipher_iv_iova;
...
} hw_chain;
struct {
void **iv_ptr;
rte_iova_t *iv_iova;
...
} hw_aead;
Is it ok?
Regards,
Fan
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-21 15:41 ` Zhang, Roy Fan
@ 2020-09-21 15:49 ` Akhil Goyal
2020-09-22 8:08 ` Zhang, Roy Fan
2020-09-22 8:21 ` Zhang, Roy Fan
0 siblings, 2 replies; 84+ messages in thread
From: Akhil Goyal @ 2020-09-21 15:49 UTC (permalink / raw)
To: Zhang, Roy Fan, dev, Ananyev, Konstantin, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Fan,
> Hi Akhil
>
> ...
> > IMO, the following union can clarify all doubts.
> > @Ananyev, Konstantin: Any suggestions from your side?
> >
> > /** IV and aad information for various use cases. */
> > union {
> > /** Supposed to be used with CPU crypto API call. */
> > struct {
> > /** array of pointers to IV */
> > void **iv;
> > /** array of pointers to AAD */
> > void **aad;
> > /** array of pointers to digest */
> > void **digest;
> > } cpu_crypto; < or any other useful name>
> > /* Supposed to be used with HW raw crypto API call. */
> > struct {
> > void *cipher_iv_ptr;
> > rte_iova_t cipher_iv_iova;
> > void *auth_iv_ptr;
> > rte_iova_t auth_iv_iova;
> > void *digest_ptr;
> > rte_iova_t digest_iova;
> > } hw_chain;
> > /* Supposed to be used with HW raw crypto API call. */
> > struct {
> > void *iv_ptr;
> > rte_iova_t iv_iova;
> > void *digest_ptr;
> > rte_iova_t digest_iova;
> > void *aad_ptr;
> > rte_iova_t aad_iova;
> > } hw_aead;
> > };
> >
> >
>
> The above structure cannot support the array of multiple jobs but a single job.
So was your previous structure. Was it not tested before?
> So we have to use something like
>
> struct {
> void **cipher_iv_ptr;
You can even drop _ptr from the name of each of them.
> rte_iova_t *cipher_iv_iova;
> ...
> } hw_chain;
> struct {
> void **iv_ptr;
> rte_iova_t *iv_iova;
> ...
> } hw_aead;
>
> Is it ok?
>
> Regards,
> Fan
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-21 15:49 ` Akhil Goyal
@ 2020-09-22 8:08 ` Zhang, Roy Fan
2020-09-22 8:21 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-22 8:08 UTC (permalink / raw)
To: Akhil Goyal, dev, Ananyev, Konstantin, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Monday, September 21, 2020 4:49 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
> Hi Fan,
> > Hi Akhil
> >
> > ...
> > > IMO, the following union can clarify all doubts.
> > > @Ananyev, Konstantin: Any suggestions from your side?
> > >
> > > /** IV and aad information for various use cases. */
> > > union {
> > > /** Supposed to be used with CPU crypto API call. */
> > > struct {
> > > /** array of pointers to IV */
> > > void **iv;
> > > /** array of pointers to AAD */
> > > void **aad;
> > > /** array of pointers to digest */
> > > void **digest;
> > > } cpu_crypto; < or any other useful name>
> > > /* Supposed to be used with HW raw crypto API call. */
> > > struct {
> > > void *cipher_iv_ptr;
> > > rte_iova_t cipher_iv_iova;
> > > void *auth_iv_ptr;
> > > rte_iova_t auth_iv_iova;
> > > void *digest_ptr;
> > > rte_iova_t digest_iova;
> > > } hw_chain;
> > > /* Supposed to be used with HW raw crypto API call. */
> > > struct {
> > > void *iv_ptr;
> > > rte_iova_t iv_iova;
> > > void *digest_ptr;
> > > rte_iova_t digest_iova;
> > > void *aad_ptr;
> > > rte_iova_t aad_iova;
> > > } hw_aead;
> > > };
> > >
> > >
> >
> > The above structure cannot support the array of multiple jobs but a single
> job.
>
> So was your previous structure. Was it not tested before?
Of course it was tested, in both DPDK and VPP. The previous structure used an array of the union additional_data
("union rte_crypto_sym_additional_data *additional_data;"). That's why the union was declared in the first place.
I am ok with the way you propose, with "*" at the beginning of every member.
>
> > So we have to use something like
> >
> > struct {
> > void **cipher_iv_ptr;
>
> You can even drop _ptr from the name of each of them.
>
> > rtei_iova_t *cipher_iv_iova;
> > ...
> > } hw_chain;
> > struct {
> > void **iv_ptr;
> > rte_iova_t *iv_iova;
> > ...
> > } hw_aead;
> >
> > Is it ok?
> >
> > Regards,
> > Fan
Regards,
Fan
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-21 15:49 ` Akhil Goyal
2020-09-22 8:08 ` Zhang, Roy Fan
@ 2020-09-22 8:21 ` Zhang, Roy Fan
2020-09-22 8:48 ` Ananyev, Konstantin
1 sibling, 1 reply; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-22 8:21 UTC (permalink / raw)
To: Akhil Goyal, dev, Ananyev, Konstantin, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil,
Thanks again for the review!
To summarize, the following changes will be made for v10.
1. Update the documentation and have it reviewed internally in Intel first.
2. Add the missing comments to the structure.
3. Rename "dp_service" to "raw_dp" in all APIs and documentation.
4. Change the structure
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
union {
/** Supposed to be used with CPU crypto API call. */
struct {
/** array of pointers to IV */
void **iv;
/** array of pointers to AAD */
void **aad;
/** array of pointers to digest */
void **digest;
} cpu_crypto;
/** Supposed to be used with HW raw crypto API call. */
struct {
/** array of pointers to cipher IV */
void **cipher_iv_ptr;
/** array of IOVA addresses to cipher IV */
rte_iova_t *cipher_iv_iova;
/** array of pointers to auth IV */
void **auth_iv_ptr;
/** array of IOVA addresses to auth IV */
rte_iova_t *auth_iv_iova;
/** array of pointers to digest */
void **digest_ptr;
/** array of IOVA addresses to digest */
rte_iova_t *digest_iova;
} hw_chain;
/** Supposed to be used with HW raw crypto API call. */
struct {
/** array of pointers to AEAD IV */
void **iv_ptr;
/** array of IOVA addresses to AEAD IV */
rte_iova_t *iv_iova;
/** array of pointers to AAD */
void **aad_ptr;
/** array of IOVA addresses to AAD */
rte_iova_t *aad_iova;
/** array of pointers to digest */
void **digest_ptr;
/** array of IOVA addresses to digest */
rte_iova_t *digest_iova;
} hw_aead;
};
/**
* array of statuses for each operation:
* - 0 on success
* - errno on error
*/
int32_t *status;
/** number of operations to perform */
uint32_t num;
};
5. Remove enum rte_crypto_dp_service; let the PMDs use the session private data to decide the function handler.
6. Remove is_update parameter.
The main point that remains uncertain is the existence of "submit_single".
I am ok to remove the "submit_single" function. In VPP we can call rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to avoid the double loop.
But then rte_cryptodev_dp_sym_submit_vec() has to be an inline function, which means the API will not be traced in the version map.
Any ideas?
Regards,
Fan
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Monday, September 21, 2020 4:49 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
> Hi Fan,
> > Hi Akhil
> >
> > ...
> > > IMO, the following union can clarify all doubts.
> > > @Ananyev, Konstantin: Any suggestions from your side?
> > >
> > > /** IV and aad information for various use cases. */
> > > union {
> > > /** Supposed to be used with CPU crypto API call. */
> > > struct {
> > > /** array of pointers to IV */
> > > void **iv;
> > > /** array of pointers to AAD */
> > > void **aad;
> > > /** array of pointers to digest */
> > > void **digest;
> > > } cpu_crypto; < or any other useful name>
> > > /* Supposed to be used with HW raw crypto API call. */
> > > struct {
> > > void *cipher_iv_ptr;
> > > rte_iova_t cipher_iv_iova;
> > > void *auth_iv_ptr;
> > > rte_iova_t auth_iv_iova;
> > > void *digest_ptr;
> > > rte_iova_t digest_iova;
> > > } hw_chain;
> > > /* Supposed to be used with HW raw crypto API call. */
> > > struct {
> > > void *iv_ptr;
> > > rte_iova_t iv_iova;
> > > void *digest_ptr;
> > > rte_iova_t digest_iova;
> > > void *aad_ptr;
> > > rte_iova_t aad_iova;
> > > } hw_aead;
> > > };
> > >
> > >
> >
> > The above structure cannot support an array of multiple jobs, only a
> > single job.
>
> So was your previous structure. Was it not tested before?
>
> > So we have to use something like
> >
> > struct {
> > void **cipher_iv_ptr;
>
> You can even drop _ptr from the name of each of them.
>
> > rte_iova_t *cipher_iv_iova;
> > ...
> > } hw_chain;
> > struct {
> > void **iv_ptr;
> > rte_iova_t *iv_iova;
> > ...
> > } hw_aead;
> >
> > Is it ok?
> >
> > Regards,
> > Fan
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 8:21 ` Zhang, Roy Fan
@ 2020-09-22 8:48 ` Ananyev, Konstantin
2020-09-22 9:05 ` Akhil Goyal
0 siblings, 1 reply; 84+ messages in thread
From: Ananyev, Konstantin @ 2020-09-22 8:48 UTC (permalink / raw)
To: Zhang, Roy Fan, Akhil Goyal, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi lads,
>
> Hi Akhil,
>
> Thanks again for the review!
> To summarize, the following places to be changed for v10.
>
> 1. Documentation update and reviewed internally in Intel first.
> 2. Add the missing comments to the structure.
> 3. Change the name "dp_service" to "raw_dp" to all APIs and documentation.
> 4. Change the structure
> struct rte_crypto_sym_vec {
> /** array of SGL vectors */
> struct rte_crypto_sgl *sgl;
>
> union {
> /** Supposed to be used with CPU crypto API call. */
> struct {
> /** array of pointers to IV */
> void **iv;
> /** array of pointers to AAD */
> void **aad;
> /** array of pointers to digest */
> void **digest;
> } cpu_crypto;
> /** Supposed to be used with HW raw crypto API call. */
> struct {
> /** array of pointers to cipher IV */
> void **cipher_iv_ptr;
> /** array of IOVA addresses to cipher IV */
> rte_iova_t *cipher_iv_iova;
> /** array of pointers to auth IV */
> void **auth_iv_ptr;
> /** array of IOVA addresses to auth IV */
> rte_iova_t *auth_iv_iova;
> /** array of pointers to digest */
> void **digest_ptr;
> /** array of IOVA addresses to digest */
> rte_iova_t *digest_iova;
> } hw_chain;
> /** Supposed to be used with HW raw crypto API call. */
> struct {
> /** array of pointers to AEAD IV */
> void **iv_ptr;
> /** array of IOVA addresses to AEAD IV */
> rte_iova_t *iv_iova;
> /** array of pointers to AAD */
> void **aad_ptr;
> /** array of IOVA addresses to AAD */
> rte_iova_t *aad_iova;
> /** array of pointers to digest */
> void **digest_ptr;
> /** array of IOVA addresses to digest */
> rte_iova_t *digest_iova;
> } hw_aead;
> };
>
> /**
> * array of statuses for each operation:
> * - 0 on success
> * - errno on error
> */
> int32_t *status;
> /** number of operations to perform */
> uint32_t num;
> };
As I understand you just need to add pointers to iova[] for iv, aad and digest, correct?
If so, why not simply:
struct rte_va_iova_ptr {
void *va;
rte_iova_t *iova;
};
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
/** array of pointers to IV */
struct rte_va_iova_ptr iv;
/** array of pointers to AAD */
struct rte_va_iova_ptr aad;
/** array of pointers to digest */
struct rte_va_iova_ptr digest;
/**
* array of statuses for each operation:
* - 0 on success
* - errno on error
*/
int32_t *status;
/** number of operations to perform */
uint32_t num;
};
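A minimal sketch of how the flattened layout above might be filled by a caller, with a stand-in `typedef` for `rte_iova_t` (normally provided by DPDK headers); the struct mirrors the proposal in this email, not necessarily the final upstream definition, and `va_iova_set` is a hypothetical helper:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for DPDK's rte_iova_t (normally from rte_common.h). */
typedef uint64_t rte_iova_t;

/* Mirrors the rte_va_iova_ptr proposed above: a virtual-address
 * pointer paired with the matching IOVA array, one entry per op. */
struct rte_va_iova_ptr {
	void *va;
	rte_iova_t *iova;
};

/* Bind a VA buffer and its IOVA table together. */
static inline void
va_iova_set(struct rte_va_iova_ptr *p, void *va, rte_iova_t *iova)
{
	p->va = va;
	p->iova = iova;
}
```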
BTW, it would be both ABI and API breakage,
though all functions using this struct are marked as experimental,
plus it is an LTS release, so it seems to be ok.
Though I think it needs to be flagged in RN.
Another option obviously - introduce completely new structure for it
and leave existing one unaffected.
>
> 5. Remove enum rte_crypto_dp_service, let the PMDs using the session private data to decide function handler.
> 6. Remove is_update parameter.
>
> > The main point that is uncertain is the existence of "submit_single".
> I am ok to remove "submit_single" function. In VPP we can use rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to avoid
> double looping.
> But we have to put the rte_cryptodev_dp_sym_submit_vec() as an inline function - this will cause the API not traced in version map.
>
> Any ideas?
>
> Regards,
> Fan
>
> > -----Original Message-----
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> > Sent: Monday, September 21, 2020 4:49 PM
> > To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org; Ananyev,
> > Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon
> > <thomas@monjalon.net>
> > Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> > <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> > <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> > <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> > Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
> >
> > Hi Fan,
> > > Hi AKhil
> > >
> > > ...
> > > > IMO, the following union can clarify all doubts.
> > > > @Ananyev, Konstantin: Any suggestions from your side?
> > > >
> > > > /** IV and aad information for various use cases. */
> > > > union {
> > > > /** Supposed to be used with CPU crypto API call. */
> > > > struct {
> > > > /** array of pointers to IV */
> > > > void **iv;
> > > > /** array of pointers to AAD */
> > > > void **aad;
> > > > /** array of pointers to digest */
> > > > void **digest;
> > > > } cpu_crypto; < or any other useful name>
> > > > /* Supposed to be used with HW raw crypto API call. */
> > > > struct {
> > > > void *cipher_iv_ptr;
> > > > rte_iova_t cipher_iv_iova;
> > > > void *auth_iv_ptr;
> > > > rte_iova_t auth_iv_iova;
> > > > void *digest_ptr;
> > > > rte_iova_t digest_iova;
> > > > } hw_chain;
> > > > /* Supposed to be used with HW raw crypto API call. */
> > > > struct {
> > > > void *iv_ptr;
> > > > rte_iova_t iv_iova;
> > > > void *digest_ptr;
> > > > rte_iova_t digest_iova;
> > > > void *aad_ptr;
> > > > rte_iova_t aad_iova;
> > > > } hw_aead;
> > > > };
> > > >
> > > >
> > >
> > > The above structure cannot support the array of multiple jobs but a single
> > job.
> >
> > So was your previous structure. Was it not tested before?
> >
> > > So we have to use something like
> > >
> > > struct {
> > > void **cipher_iv_ptr;
> >
> > You can even drop _ptr from the name of each of them.
> >
> > > rte_iova_t *cipher_iv_iova;
> > > ...
> > > } hw_chain;
> > > struct {
> > > void **iv_ptr;
> > > rte_iova_t *iv_iova;
> > > ...
> > > } hw_aead;
> > >
> > > Is it ok?
> > >
> > > Regards,
> > > Fan
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 8:48 ` Ananyev, Konstantin
@ 2020-09-22 9:05 ` Akhil Goyal
2020-09-22 9:28 ` Zhang, Roy Fan
0 siblings, 1 reply; 84+ messages in thread
From: Akhil Goyal @ 2020-09-22 9:05 UTC (permalink / raw)
To: Ananyev, Konstantin, Zhang, Roy Fan, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Konstantin,
> Hi lads,
>
> >
> > Hi Akhil,
> >
> > Thanks again for the review!
> > To summarize, the following places to be changed for v10.
> >
> > 1. Documentation update and reviewed internally in Intel first.
> > 2. Add the missing comments to the structure.
> > 3. Change the name "dp_service" to "raw_dp" to all APIs and documentation.
> > 4. Change the structure
> > struct rte_crypto_sym_vec {
> > /** array of SGL vectors */
> > struct rte_crypto_sgl *sgl;
> >
> > union {
> > /** Supposed to be used with CPU crypto API call. */
> > struct {
> > /** array of pointers to IV */
> > void **iv;
> > /** array of pointers to AAD */
> > void **aad;
> > /** array of pointers to digest */
> > void **digest;
> > } cpu_crypto;
> > /** Supposed to be used with HW raw crypto API call. */
> > struct {
> > /** array of pointers to cipher IV */
> > void **cipher_iv_ptr;
> > /** array of IOVA addresses to cipher IV */
> > rte_iova_t *cipher_iv_iova;
> > /** array of pointers to auth IV */
> > void **auth_iv_ptr;
> > /** array of IOVA addresses to auth IV */
> > rte_iova_t *auth_iv_iova;
> > /** array of pointers to digest */
> > void **digest_ptr;
> > /** array of IOVA addresses to digest */
> > rte_iova_t *digest_iova;
> > } hw_chain;
> > /** Supposed to be used with HW raw crypto API call. */
> > struct {
> > /** array of pointers to AEAD IV */
> > void **iv_ptr;
> > /** array of IOVA addresses to AEAD IV */
> > rte_iova_t *iv_iova;
> > /** array of pointers to AAD */
> > void **aad_ptr;
> > /** array of IOVA addresses to AAD */
> > rte_iova_t *aad_iova;
> > /** array of pointers to digest */
> > void **digest_ptr;
> > /** array of IOVA addresses to digest */
> > rte_iova_t *digest_iova;
> > } hw_aead;
> > };
> >
> > /**
> > * array of statuses for each operation:
> > * - 0 on success
> > * - errno on error
> > */
> > int32_t *status;
> > /** number of operations to perform */
> > uint32_t num;
> > };
>
>
> As I understand you just need to add pointers to iova[] for iv, aad and digest,
> correct?
> If so, why not simply:
>
> struct rte_va_iova_ptr {
> void *va;
> rte_iova_t *iova;
> };
>
> struct rte_crypto_sym_vec {
> /** array of SGL vectors */
> struct rte_crypto_sgl *sgl;
> /** array of pointers to IV */
> struct rte_va_iova_ptr iv;
> /** array of pointers to AAD */
> struct rte_va_iova_ptr aad;
> /** array of pointers to digest */
> struct rte_va_iova_ptr digest;
> /**
> * array of statuses for each operation:
> * - 0 on success
> * - errno on error
> */
> int32_t *status;
> /** number of operations to perform */
> uint32_t num;
> };
>
> BTW, it would be both ABI and API breakage,
> though all functions using this struct are marked as experimental,
> plus it is an LTS release, so it seems to be ok.
> Though I think it needs to be flagged in RN.
This is a good suggestion. It will require some changes in the cpu-crypto support as well,
and should be a separate patch.
We can take the API and ABI breakage in this release. That is not an issue.
>
> Another option obviously - introduce completely new structure for it
> and leave existing one unaffected.
>
This will create some duplicate code. Would not prefer that.
> >
> > 5. Remove enum rte_crypto_dp_service, let the PMDs using the session private
> data to decide function handler.
> > 6. Remove is_update parameter.
> >
> > The main point that is uncertain is the existence of "submit_single".
> > I am ok to remove "submit_single" function. In VPP we can use
> rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to avoid
> > double looping.
> > But we have to put the rte_cryptodev_dp_sym_submit_vec() as an inline
> function - this will cause the API not traced in version map.
> >
> > Any ideas?
> >
> > Regards,
> > Fan
> >
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 9:05 ` Akhil Goyal
@ 2020-09-22 9:28 ` Zhang, Roy Fan
2020-09-22 10:18 ` Ananyev, Konstantin
0 siblings, 1 reply; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-22 9:28 UTC (permalink / raw)
To: Akhil Goyal, Ananyev, Konstantin, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil and Konstantin,
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Tuesday, September 22, 2020 10:06 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; dev@dpdk.org; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
> Hi Konstantin,
> > Hi lads,
> >
> > >
> > > Hi Akhil,
> > >
> > > Thanks again for the review!
> > > To summarize, the following places to be changed for v10.
> > >
> > > 1. Documentation update and reviewed internally in Intel first.
> > > 2. Add the missing comments to the structure.
> > > 3. Change the name "dp_service" to "raw_dp" to all APIs and
> documentation.
> > > 4. Change the structure
> > > struct rte_crypto_sym_vec {
> > > /** array of SGL vectors */
> > > struct rte_crypto_sgl *sgl;
> > >
> > > union {
> > > /** Supposed to be used with CPU crypto API call. */
> > > struct {
> > > /** array of pointers to IV */
> > > void **iv;
> > > /** array of pointers to AAD */
> > > void **aad;
> > > /** array of pointers to digest */
> > > void **digest;
> > > } cpu_crypto;
> > > /** Supposed to be used with HW raw crypto API call. */
> > > struct {
> > > /** array of pointers to cipher IV */
> > > void **cipher_iv_ptr;
> > > /** array of IOVA addresses to cipher IV */
> > > rte_iova_t *cipher_iv_iova;
> > > /** array of pointers to auth IV */
> > > void **auth_iv_ptr;
> > > /** array of IOVA addresses to auth IV */
> > > rte_iova_t *auth_iv_iova;
> > > /** array of pointers to digest */
> > > void **digest_ptr;
> > > /** array of IOVA addresses to digest */
> > > rte_iova_t *digest_iova;
> > > } hw_chain;
> > > /** Supposed to be used with HW raw crypto API call. */
> > > struct {
> > > /** array of pointers to AEAD IV */
> > > void **iv_ptr;
> > > /** array of IOVA addresses to AEAD IV */
> > > rte_iova_t *iv_iova;
> > > /** array of pointers to AAD */
> > > void **aad_ptr;
> > > /** array of IOVA addresses to AAD */
> > > rte_iova_t *aad_iova;
> > > /** array of pointers to digest */
> > > void **digest_ptr;
> > > /** array of IOVA addresses to digest */
> > > rte_iova_t *digest_iova;
> > > } hw_aead;
> > > };
> > >
> > > /**
> > > * array of statuses for each operation:
> > > * - 0 on success
> > > * - errno on error
> > > */
> > > int32_t *status;
> > > /** number of operations to perform */
> > > uint32_t num;
> > > };
> >
> >
> > As I understand you just need to add pointers to iova[] for iv, aad and
> digest,
> > correct?
> > If so, why not simply:
> >
> > struct rte_va_iova_ptr {
> > void *va;
> > rte_iova_t *iova;
> > };
> >
> > struct rte_crypto_sym_vec {
> > /** array of SGL vectors */
> > struct rte_crypto_sgl *sgl;
> > /** array of pointers to IV */
> > struct rte_va_iova_ptr iv;
We will need 2 IVs here, one for cipher and one for auth (GMAC for example).
> > /** array of pointers to AAD */
> > struct rte_va_iova_ptr aad;
> > /** array of pointers to digest */
> > struct rte_va_iova_ptr digest;
> > /**
> > * array of statuses for each operation:
> > * - 0 on success
> > * - errno on error
> > */
> > int32_t *status;
> > /** number of operations to perform */
> > uint32_t num;
> > };
> >
> > BTW, it would be both ABI and API breakage,
> > though all functions using this struct are marked as experimental,
> > plus it is an LTS release, so it seems to be ok.
> > Though I think it needs to be flagged in RN.
>
> This is a good suggestion. This will make some changes in the cpu-crypto
> support as well
> And should be a separate patch.
> We can take the API and ABI breakage in this release. That is not an issue.
>
>
> >
> > Another option obviously - introduce completely new structure for it
> > and leave existing one unaffected.
> >
> This will create some duplicate code. Would not prefer that.
>
> > >
> > > 5. Remove enum rte_crypto_dp_service, let the PMDs using the session
> private
> > data to decide function handler.
> > > 6. Remove is_update parameter.
> > >
> > > > The main point that is uncertain is the existence of "submit_single".
> > > I am ok to remove "submit_single" function. In VPP we can use
> > rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to avoid
> > > double looping.
> > > But we have to put the rte_cryptodev_dp_sym_submit_vec() as an inline
> > function - this will cause the API not traced in version map.
> > >
> > > Any ideas?
> > >
> > > Regards,
> > > Fan
> > >
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 9:28 ` Zhang, Roy Fan
@ 2020-09-22 10:18 ` Ananyev, Konstantin
2020-09-22 12:15 ` Zhang, Roy Fan
2020-09-22 12:50 ` Zhang, Roy Fan
0 siblings, 2 replies; 84+ messages in thread
From: Ananyev, Konstantin @ 2020-09-22 10:18 UTC (permalink / raw)
To: Zhang, Roy Fan, Akhil Goyal, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
>
> Hi Akhil and Konstantin,
>
> > -----Original Message-----
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> > Sent: Tuesday, September 22, 2020 10:06 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> > <roy.fan.zhang@intel.com>; dev@dpdk.org; Thomas Monjalon
> > <thomas@monjalon.net>
> > Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> > <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> > <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> > <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> > Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
> >
> > Hi Konstantin,
> > > Hi lads,
> > >
> > > >
> > > > Hi Akhil,
> > > >
> > > > Thanks again for the review!
> > > > To summarize, the following places to be changed for v10.
> > > >
> > > > 1. Documentation update and reviewed internally in Intel first.
> > > > 2. Add the missing comments to the structure.
> > > > 3. Change the name "dp_service" to "raw_dp" to all APIs and
> > documentation.
> > > > 4. Change the structure
> > > > struct rte_crypto_sym_vec {
> > > > /** array of SGL vectors */
> > > > struct rte_crypto_sgl *sgl;
> > > >
> > > > union {
> > > > /** Supposed to be used with CPU crypto API call. */
> > > > struct {
> > > > /** array of pointers to IV */
> > > > void **iv;
> > > > /** array of pointers to AAD */
> > > > void **aad;
> > > > /** array of pointers to digest */
> > > > void **digest;
> > > > } cpu_crypto;
> > > > /** Supposed to be used with HW raw crypto API call. */
> > > > struct {
> > > > /** array of pointers to cipher IV */
> > > > void **cipher_iv_ptr;
> > > > /** array of IOVA addresses to cipher IV */
> > > > rte_iova_t *cipher_iv_iova;
> > > > /** array of pointers to auth IV */
> > > > void **auth_iv_ptr;
> > > > /** array of IOVA addresses to auth IV */
> > > > rte_iova_t *auth_iv_iova;
> > > > /** array of pointers to digest */
> > > > void **digest_ptr;
> > > > /** array of IOVA addresses to digest */
> > > > rte_iova_t *digest_iova;
> > > > } hw_chain;
> > > > /** Supposed to be used with HW raw crypto API call. */
> > > > struct {
> > > > /** array of pointers to AEAD IV */
> > > > void **iv_ptr;
> > > > /** array of IOVA addresses to AEAD IV */
> > > > rte_iova_t *iv_iova;
> > > > /** array of pointers to AAD */
> > > > void **aad_ptr;
> > > > /** array of IOVA addresses to AAD */
> > > > rte_iova_t *aad_iova;
> > > > /** array of pointers to digest */
> > > > void **digest_ptr;
> > > > /** array of IOVA addresses to digest */
> > > > rte_iova_t *digest_iova;
> > > > } hw_aead;
> > > > };
> > > >
> > > > /**
> > > > * array of statuses for each operation:
> > > > * - 0 on success
> > > > * - errno on error
> > > > */
> > > > int32_t *status;
> > > > /** number of operations to perform */
> > > > uint32_t num;
> > > > };
> > >
> > >
> > > As I understand you just need to add pointers to iova[] for iv, aad and
> > digest,
> > > correct?
> > > If so, why not simply:
> > >
> > > struct rte_va_iova_ptr {
> > > void *va;
> > > rte_iova_t *iova;
> > > };
> > >
> > > struct rte_crypto_sym_vec {
> > > /** array of SGL vectors */
> > > struct rte_crypto_sgl *sgl;
> > > /** array of pointers to IV */
> > > struct rte_va_iova_ptr iv;
>
> We will need 2 IVs here, one for cipher and one for auth (GMAC for example).
Hmm, why do we need two different IVs for GMAC?
And if so how does it work now with either rte_crypto_op or with rte_crypto_sym_vec?
>
> > > /** array of pointers to AAD */
> > > struct rte_va_iova_ptr aad;
> > > /** array of pointers to digest */
> > > struct rte_va_iova_ptr digest;
> > > /**
> > > * array of statuses for each operation:
> > > * - 0 on success
> > > * - errno on error
> > > */
> > > int32_t *status;
> > > /** number of operations to perform */
> > > uint32_t num;
> > > };
> > >
> > > BTW, it would be both ABI and API breakage,
> > > though all functions using this struct are marked as experimental,
> > > plus it is an LTS release, so it seems to be ok.
> > > Though I think it needs to be flagged in RN.
> >
> > This is a good suggestion. This will make some changes in the cpu-crypto
> > support as well
> > And should be a separate patch.
> > We can take the API and ABI breakage in this release. That is not an issue.
> >
> >
> > >
> > > Another option obviously - introduce completely new structure for it
> > > and leave existing one unaffected.
> > >
> > This will create some duplicate code. Would not prefer that.
> >
> > > >
> > > > 5. Remove enum rte_crypto_dp_service, let the PMDs using the session
> > private
> > > data to decide function handler.
> > > > 6. Remove is_update parameter.
> > > >
> > > > The main point that is uncertain is the existance of "submit_single".
> > > > I am ok to remove "submit_single" function. In VPP we can use
> > > rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to avoid
> > > > double looping.
> > > > But we have to put the rte_cryptodev_dp_sym_submit_vec() as an inline
> > > function - this will cause the API not traced in version map.
> > > >
> > > > Any ideas?
> > > >
> > > > Regards,
> > > > Fan
> > > >
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 10:18 ` Ananyev, Konstantin
@ 2020-09-22 12:15 ` Zhang, Roy Fan
2020-09-22 12:50 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-22 12:15 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Konstantin,
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Tuesday, September 22, 2020 11:18 AM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Akhil Goyal
> <akhil.goyal@nxp.com>; dev@dpdk.org; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
>
>
> >
> > Hi Akhil and Konstantin,
> >
> > > -----Original Message-----
> > > From: Akhil Goyal <akhil.goyal@nxp.com>
> > > Sent: Tuesday, September 22, 2020 10:06 AM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Zhang, Roy
> Fan
> > > <roy.fan.zhang@intel.com>; dev@dpdk.org; Thomas Monjalon
> > > <thomas@monjalon.net>
> > > Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> > > <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> > > <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> > > <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> > > Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service
> APIs
> > >
> > > Hi Konstantin,
> > > > Hi lads,
> > > >
> > > > >
> > > > > Hi Akhil,
> > > > >
> > > > > Thanks again for the review!
> > > > > To summarize, the following places to be changed for v10.
> > > > >
> > > > > 1. Documentation update and reviewed internally in Intel first.
> > > > > 2. Add the missing comments to the structure.
> > > > > 3. Change the name "dp_service" to "raw_dp" to all APIs and
> > > documentation.
> > > > > 4. Change the structure
> > > > > struct rte_crypto_sym_vec {
> > > > > /** array of SGL vectors */
> > > > > struct rte_crypto_sgl *sgl;
> > > > >
> > > > > union {
> > > > > /** Supposed to be used with CPU crypto API call. */
> > > > > struct {
> > > > > /** array of pointers to IV */
> > > > > void **iv;
> > > > > /** array of pointers to AAD */
> > > > > void **aad;
> > > > > /** array of pointers to digest */
> > > > > void **digest;
> > > > > } cpu_crypto;
> > > > > /** Supposed to be used with HW raw crypto API call. */
> > > > > struct {
> > > > > /** array of pointers to cipher IV */
> > > > > void **cipher_iv_ptr;
> > > > > /** array of IOVA addresses to cipher IV */
> > > > > rte_iova_t *cipher_iv_iova;
> > > > > /** array of pointers to auth IV */
> > > > > void **auth_iv_ptr;
> > > > > /** array of IOVA addresses to auth IV */
> > > > > rte_iova_t *auth_iv_iova;
> > > > > /** array of pointers to digest */
> > > > > void **digest_ptr;
> > > > > /** array of IOVA addresses to digest */
> > > > > rte_iova_t *digest_iova;
> > > > > } hw_chain;
> > > > > /** Supposed to be used with HW raw crypto API call. */
> > > > > struct {
> > > > > /** array of pointers to AEAD IV */
> > > > > void **iv_ptr;
> > > > > /** array of IOVA addresses to AEAD IV */
> > > > > rte_iova_t *iv_iova;
> > > > > /** array of pointers to AAD */
> > > > > void **aad_ptr;
> > > > > /** array of IOVA addresses to AAD */
> > > > > rte_iova_t *aad_iova;
> > > > > /** array of pointers to digest */
> > > > > void **digest_ptr;
> > > > > /** array of IOVA addresses to digest */
> > > > > rte_iova_t *digest_iova;
> > > > > } hw_aead;
> > > > > };
> > > > >
> > > > > /**
> > > > > * array of statuses for each operation:
> > > > > * - 0 on success
> > > > > * - errno on error
> > > > > */
> > > > > int32_t *status;
> > > > > /** number of operations to perform */
> > > > > uint32_t num;
> > > > > };
> > > >
> > > >
> > > > As I understand you just need to add pointers to iova[] for iv, aad and
> > > digest,
> > > > correct?
> > > > If so, why not simply:
> > > >
> > > > struct rte_va_iova_ptr {
> > > > void *va;
> > > > rte_iova_t *iova;
> > > > };
> > > >
> > > > struct rte_crypto_sym_vec {
> > > > /** array of SGL vectors */
> > > > struct rte_crypto_sgl *sgl;
> > > > /** array of pointers to IV */
> > > > struct rte_va_iova_ptr iv;
> >
> > We will need 2 IV here, one for cipher and one for auth (GMAC for
> example).
>
> Hmm, why do we need two different IVs for GMAC?
> And if so how does it work now with either rte_crypto_op or with
> rte_crypto_sym_vec?
>
Not only GMAC: wireless chain algorithms such as SNOW3G also require an IV in the auth field (see test_snow3g_cipher_auth() in the unit tests).
rte_crypto_sym_op has auth_iv.offset to indicate where the IV is stored in the crypto op, so its virtual and physical addresses can be deduced from that offset, but there is no equivalent yet in rte_crypto_sym_vec.
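The offset-based deduction described here can be sketched as follows (`iv_from_offset` is a hypothetical helper for illustration, and `rte_iova_t` is a stand-in typedef - neither is part of the DPDK API):

```c
#include <stdint.h>

/* Stand-in for DPDK's rte_iova_t. */
typedef uint64_t rte_iova_t;

/* Given the base VA/IOVA of the buffer holding the crypto op data and
 * the auth_iv.offset, derive both addresses of the auth IV, as the
 * rte_crypto_sym_op layout allows. */
static inline void
iv_from_offset(void *base_va, rte_iova_t base_iova, uint16_t iv_offset,
	       void **iv_va, rte_iova_t *iv_iova)
{
	*iv_va = (uint8_t *)base_va + iv_offset;
	*iv_iova = base_iova + iv_offset;
}
```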
> >
> > > > /** array of pointers to AAD */
> > > > struct rte_va_iova_ptr aad;
> > > > /** array of pointers to digest */
> > > > struct rte_va_iova_ptr digest;
> > > > /**
> > > > * array of statuses for each operation:
> > > > * - 0 on success
> > > > * - errno on error
> > > > */
> > > > int32_t *status;
> > > > /** number of operations to perform */
> > > > uint32_t num;
> > > > };
> > > >
> > > > BTW, it would be both ABI and API breakage,
> > > > though all functions using this struct are marked as experimental,
> > > > plus it is an LTS release, so it seems to be ok.
> > > > Though I think it needs to be flagged in RN.
> > >
> > > This is a good suggestion. This will make some changes in the cpu-crypto
> > > support as well
> > > And should be a separate patch.
> > > We can take the API and ABI breakage in this release. That is not an issue.
> > >
> > >
> > > >
> > > > Another option obviously - introduce completely new structure for it
> > > > and leave existing one unaffected.
> > > >
> > > This will create some duplicate code. Would not prefer that.
> > >
> > > > >
> > > > > 5. Remove enum rte_crypto_dp_service, let the PMDs using the
> session
> > > private
> > > > data to decide function handler.
> > > > > 6. Remove is_update parameter.
> > > > >
> > > > > The main point that is uncertain is the existence of "submit_single".
> > > > > I am ok to remove "submit_single" function. In VPP we can use
> > > > rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to
> avoid
> > > > > double looping.
> > > > > But we have to put the rte_cryptodev_dp_sym_submit_vec() as an
> inline
> > > > function - this will cause the API not traced in version map.
> > > > >
> > > > > Any ideas?
> > > > >
> > > > > Regards,
> > > > > Fan
> > > > >
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 10:18 ` Ananyev, Konstantin
2020-09-22 12:15 ` Zhang, Roy Fan
@ 2020-09-22 12:50 ` Zhang, Roy Fan
2020-09-22 12:52 ` Akhil Goyal
1 sibling, 1 reply; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-22 12:50 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Akhil,
Konstantin and I had an off-line discussion. Is this structure ok for you?
/**
* Crypto virtual and IOVA address descriptor. Used to describe cryptographic
* data without depending on a crypto operation or mbuf.
*/
struct rte_crypto_va_iova_ptr {
void *va;
rte_iova_t *iova;
};
/**
* Raw data operation descriptor.
* Supposed to be used with synchronous CPU crypto API call or asynchronous
* RAW data path API call.
*/
struct rte_crypto_sym_vec {
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
/** array of pointers to cipher IV */
struct rte_crypto_va_iova_ptr *iv;
/** array of pointers to digest */
struct rte_crypto_va_iova_ptr *digest;
__extension__
union {
/** array of pointers to auth IV, used for chain operation */
struct rte_crypto_va_iova_ptr *auth_iv;
/** array of pointers to AAD, used for AEAD operation */
struct rte_crypto_va_iova_ptr *aad;
};
/**
* array of statuses for each operation:
* - 0 on success
* - errno on error
*/
int32_t *status;
/** number of operations to perform */
uint32_t num;
};
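A trimmed, compilable mirror of the structure above (stand-in types; `sym_vec_sketch` omits the sgl field and is illustrative only, not the upstream rte_crypto_sym_vec), showing how the per-op status array might be scanned after dequeue:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t rte_iova_t; /* stand-in for DPDK's rte_iova_t */

struct rte_crypto_va_iova_ptr {
	void *va;
	rte_iova_t *iova;
};

/* Trimmed mirror of the proposed rte_crypto_sym_vec (sgl omitted). */
struct sym_vec_sketch {
	struct rte_crypto_va_iova_ptr *iv;     /* cipher IV per op */
	struct rte_crypto_va_iova_ptr *digest; /* digest per op */
	union {
		struct rte_crypto_va_iova_ptr *auth_iv; /* chain ops */
		struct rte_crypto_va_iova_ptr *aad;     /* AEAD ops */
	};
	int32_t *status; /* 0 on success, errno on error, per op */
	uint32_t num;    /* number of operations */
};

/* Count the operations in the batch that completed successfully. */
static inline uint32_t
sym_vec_ok(const struct sym_vec_sketch *v)
{
	uint32_t i, ok = 0;

	for (i = 0; i < v->num; i++)
		if (v->status[i] == 0)
			ok++;
	return ok;
}
```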
Regards,
Fan
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Tuesday, September 22, 2020 11:18 AM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Akhil Goyal
> <akhil.goyal@nxp.com>; dev@dpdk.org; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
>
>
>
> >
> > Hi Akhil and Konstantin,
> >
> > > -----Original Message-----
> > > From: Akhil Goyal <akhil.goyal@nxp.com>
> > > Sent: Tuesday, September 22, 2020 10:06 AM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Zhang, Roy
> Fan
> > > <roy.fan.zhang@intel.com>; dev@dpdk.org; Thomas Monjalon
> > > <thomas@monjalon.net>
> > > Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> > > <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> > > <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> > > <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> > > Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service
> APIs
> > >
> > > Hi Konstantin,
> > > > Hi lads,
> > > >
> > > > >
> > > > > Hi Akhil,
> > > > >
> > > > > Thanks again for the review!
> > > > > To summarize, the following places to be changed for v10.
> > > > >
> > > > > 1. Documentation update and reviewed internally in Intel first.
> > > > > 2. Add the missing comments to the structure.
> > > > > 3. Change the name "dp_service" to "raw_dp" to all APIs and
> > > documentation.
> > > > > 4. Change the structure
> > > > > struct rte_crypto_sym_vec {
> > > > > /** array of SGL vectors */
> > > > > struct rte_crypto_sgl *sgl;
> > > > >
> > > > > union {
> > > > > /** Supposed to be used with CPU crypto API call. */
> > > > > struct {
> > > > > /** array of pointers to IV */
> > > > > void **iv;
> > > > > /** array of pointers to AAD */
> > > > > void **aad;
> > > > > /** array of pointers to digest */
> > > > > void **digest;
> > > > > } cpu_crypto;
> > > > > /** Supposed to be used with HW raw crypto API call. */
> > > > > struct {
> > > > > /** array of pointers to cipher IV */
> > > > > void **cipher_iv_ptr;
> > > > > /** array of IOVA addresses to cipher IV */
> > > > > rte_iova_t *cipher_iv_iova;
> > > > > /** array of pointers to auth IV */
> > > > > void **auth_iv_ptr;
> > > > > /** array of IOVA addresses to auth IV */
> > > > > rte_iova_t *auth_iv_iova;
> > > > > /** array of pointers to digest */
> > > > > void **digest_ptr;
> > > > > /** array of IOVA addresses to digest */
> > > > > rte_iova_t *digest_iova;
> > > > > } hw_chain;
> > > > > /** Supposed to be used with HW raw crypto API call. */
> > > > > struct {
> > > > > /** array of pointers to AEAD IV */
> > > > > void **iv_ptr;
> > > > > /** array of IOVA addresses to AEAD IV */
> > > > > rte_iova_t *iv_iova;
> > > > > /** array of pointers to AAD */
> > > > > void **aad_ptr;
> > > > > /** array of IOVA addresses to AAD */
> > > > > rte_iova_t *aad_iova;
> > > > > /** array of pointers to digest */
> > > > > void **digest_ptr;
> > > > > /** array of IOVA addresses to digest */
> > > > > rte_iova_t *digest_iova;
> > > > > } hw_aead;
> > > > > };
> > > > >
> > > > > /**
> > > > > * array of statuses for each operation:
> > > > > * - 0 on success
> > > > > * - errno on error
> > > > > */
> > > > > int32_t *status;
> > > > > /** number of operations to perform */
> > > > > uint32_t num;
> > > > > };
> > > >
> > > >
> > > > As I understand you just need to add pointers to iova[] for iv, aad and
> > > digest,
> > > > correct?
> > > > If so, why not simply:
> > > >
> > > > struct rte_va_iova_ptr {
> > > > void *va;
> > > > rte_iova_t *iova;
> > > > };
> > > >
> > > > struct rte_crypto_sym_vec {
> > > > /** array of SGL vectors */
> > > > struct rte_crypto_sgl *sgl;
> > > > /** array of pointers to IV */
> > > > struct rte_va_iova_ptr iv;
> >
> > We will need 2 IV here, one for cipher and one for auth (GMAC for
> example).
>
> Hmm, why do we need to different IVs for GMAC?
> And if so how does it work now with either rte_crypto_op or with
> rte_crypto_sym_vec?
>
> >
> > > > /** array of pointers to AAD */
> > > > struct rte_va_iova_ptr aad;
> > > > /** array of pointers to digest */
> > > > struct rte_va_iova_ptr digest;
> > > > /**
> > > > * array of statuses for each operation:
> > > > * - 0 on success
> > > > * - errno on error
> > > > */
> > > > int32_t *status;
> > > > /** number of operations to perform */
> > > > uint32_t num;
> > > > };
> > > >
> > > > BTW, it would be both ABI and API breakage,
> > > > though all functions using this struct are marked as experimental,
> > > > plus it is an LTS release, so it seems to be ok.
> > > > Though I think it needs to be flagged in RN.
> > >
> > > This is a good suggestion. This will make some changes in the cpu-crypto
> > > support as well
> > > And should be a separate patch.
> > > We can take the API and ABI breakage in this release. That is not an issue.
> > >
> > >
> > > >
> > > > Another option obviously - introduce completely new structure for it
> > > > and leave existing one unaffected.
> > > >
> > > This will create some duplicate code. Would not prefer that.
> > >
> > > > >
> > > > > 5. Remove enum rte_crypto_dp_service, let the PMDs using the
> session
> > > private
> > > > data to decide function handler.
> > > > > 6. Remove is_update parameter.
> > > > >
> > > > > The main point that is uncertain is the existance of "submit_single".
> > > > > I am ok to remove "submit_single" function. In VPP we can use
> > > > rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to
> avoid
> > > > > double looping.
> > > > > But we have to put the rte_cryptodev_dp_sym_submit_vec() as an
> inline
> > > > function - this will cause the API not traced in version map.
> > > > >
> > > > > Any ideas?
> > > > >
> > > > > Regards,
> > > > > Fan
> > > > >
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
2020-09-22 12:50 ` Zhang, Roy Fan
@ 2020-09-22 12:52 ` Akhil Goyal
0 siblings, 0 replies; 84+ messages in thread
From: Akhil Goyal @ 2020-09-22 12:52 UTC (permalink / raw)
To: Zhang, Roy Fan, Ananyev, Konstantin, dev, Thomas Monjalon
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Bronowski,
PiotrX, Anoob Joseph
Hi Fan,
> Hi Akhil,
>
> Konstantin and I had an off-line discussion. Is this structure ok for you?
>
> /**
> * Crypto virtual and IOVA address descriptor. Used to describe cryptographic
> * data without
The comment is incomplete; however, the structure is fine.
> *
> */
> struct rte_crypto_va_iova_ptr {
> void *va;
> rte_iova_t *iova;
> };
>
> /**
> * Raw data operation descriptor.
> * Supposed to be used with synchronous CPU crypto API call or asynchronous
> * RAW data path API call.
> */
> struct rte_crypto_sym_vec {
> /** array of SGL vectors */
> struct rte_crypto_sgl *sgl;
> /** array of pointers to cipher IV */
> struct rte_crypto_va_iova_ptr *iv;
> /** array of pointers to digest */
> struct rte_crypto_va_iova_ptr *digest;
>
> __extension__
> union {
> /** array of pointers to auth IV, used for chain operation */
> struct rte_crypto_va_iova_ptr *auth_iv;
> /** array of pointers to AAD, used for AEAD operation */
> struct rte_crypto_va_iova_ptr *aad;
> };
>
> /**
> * array of statuses for each operation:
> * - 0 on success
> * - errno on error
> */
> int32_t *status;
> /** number of operations to perform */
> uint32_t num;
> };
>
> Regards,
> Fan
>
> > -----Original Message-----
> > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Sent: Tuesday, September 22, 2020 11:18 AM
> > To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Akhil Goyal
> > <akhil.goyal@nxp.com>; dev@dpdk.org; Thomas Monjalon
> > <thomas@monjalon.net>
> > Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> > <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> > <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> > <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> > Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service APIs
> >
> >
> >
> > >
> > > Hi Akhil and Konstantin,
> > >
> > > > -----Original Message-----
> > > > From: Akhil Goyal <akhil.goyal@nxp.com>
> > > > Sent: Tuesday, September 22, 2020 10:06 AM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Zhang, Roy
> > Fan
> > > > <roy.fan.zhang@intel.com>; dev@dpdk.org; Thomas Monjalon
> > > > <thomas@monjalon.net>
> > > > Cc: Trahe, Fiona <fiona.trahe@intel.com>; Kusztal, ArkadiuszX
> > > > <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> > > > <adamx.dybkowski@intel.com>; Bronowski, PiotrX
> > > > <piotrx.bronowski@intel.com>; Anoob Joseph <anoobj@marvell.com>
> > > > Subject: RE: [dpdk-dev v9 1/4] cryptodev: add crypto data-path service
> > APIs
> > > >
> > > > Hi Konstantin,
> > > > > Hi lads,
> > > > >
> > > > > >
> > > > > > Hi Akhil,
> > > > > >
> > > > > > Thanks again for the review!
> > > > > > To summarize, the following places to be changed for v10.
> > > > > >
> > > > > > 1. Documentation update and reviewed internally in Intel first.
> > > > > > 2. Add the missing comments to the structure.
> > > > > > 3. Change the name "dp_service" to "raw_dp" to all APIs and
> > > > documentation.
> > > > > > 4. Change the structure
> > > > > > struct rte_crypto_sym_vec {
> > > > > > /** array of SGL vectors */
> > > > > > struct rte_crypto_sgl *sgl;
> > > > > >
> > > > > > union {
> > > > > > /** Supposed to be used with CPU crypto API call. */
> > > > > > struct {
> > > > > > /** array of pointers to IV */
> > > > > > void **iv;
> > > > > > /** array of pointers to AAD */
> > > > > > void **aad;
> > > > > > /** array of pointers to digest */
> > > > > > void **digest;
> > > > > > } cpu_crypto;
> > > > > > /** Supposed to be used with HW raw crypto API call.
> */
> > > > > > struct {
> > > > > > /** array of pointers to cipher IV */
> > > > > > void **cipher_iv_ptr;
> > > > > > /** array of IOVA addresses to cipher IV */
> > > > > > rte_iova_t *cipher_iv_iova;
> > > > > > /** array of pointers to auth IV */
> > > > > > void **auth_iv_ptr;
> > > > > > /** array of IOVA addresses to auth IV */
> > > > > > rte_iova_t *auth_iv_iova;
> > > > > > /** array of pointers to digest */
> > > > > > void **digest_ptr;
> > > > > > /** array of IOVA addresses to digest */
> > > > > > rte_iova_t *digest_iova;
> > > > > > } hw_chain;
> > > > > > /** Supposed to be used with HW raw crypto API call.
> */
> > > > > > struct {
> > > > > > /** array of pointers to AEAD IV */
> > > > > > void **iv_ptr;
> > > > > > /** array of IOVA addresses to AEAD IV */
> > > > > > rte_iova_t *iv_iova;
> > > > > > /** array of pointers to AAD */
> > > > > > void **aad_ptr;
> > > > > > /** array of IOVA addresses to AAD */
> > > > > > rte_iova_t *aad_iova;
> > > > > > /** array of pointers to digest */
> > > > > > void **digest_ptr;
> > > > > > /** array of IOVA addresses to digest */
> > > > > > rte_iova_t *digest_iova;
> > > > > > } hw_aead;
> > > > > > };
> > > > > >
> > > > > > /**
> > > > > > * array of statuses for each operation:
> > > > > > * - 0 on success
> > > > > > * - errno on error
> > > > > > */
> > > > > > int32_t *status;
> > > > > > /** number of operations to perform */
> > > > > > uint32_t num;
> > > > > > };
> > > > >
> > > > >
> > > > > As I understand you just need to add pointers to iova[] for iv, aad and
> > > > digest,
> > > > > correct?
> > > > > If so, why not simply:
> > > > >
> > > > > struct rte_va_iova_ptr {
> > > > > void *va;
> > > > > rte_iova_t *iova;
> > > > > };
> > > > >
> > > > > struct rte_crypto_sym_vec {
> > > > > /** array of SGL vectors */
> > > > > struct rte_crypto_sgl *sgl;
> > > > > /** array of pointers to IV */
> > > > > struct rte_va_iova_ptr iv;
> > >
> > > We will need 2 IV here, one for cipher and one for auth (GMAC for
> > example).
> >
> > Hmm, why do we need to different IVs for GMAC?
> > And if so how does it work now with either rte_crypto_op or with
> > rte_crypto_sym_vec?
> >
> > >
> > > > > /** array of pointers to AAD */
> > > > > struct rte_va_iova_ptr aad;
> > > > > /** array of pointers to digest */
> > > > > struct rte_va_iova_ptr digest;
> > > > > /**
> > > > > * array of statuses for each operation:
> > > > > * - 0 on success
> > > > > * - errno on error
> > > > > */
> > > > > int32_t *status;
> > > > > /** number of operations to perform */
> > > > > uint32_t num;
> > > > > };
> > > > >
> > > > > BTW, it would be both ABI and API breakage,
> > > > > though all functions using this struct are marked as experimental,
> > > > > plus it is an LTS release, so it seems to be ok.
> > > > > Though I think it needs to be flagged in RN.
> > > >
> > > > This is a good suggestion. This will make some changes in the cpu-crypto
> > > > support as well
> > > > And should be a separate patch.
> > > > We can take the API and ABI breakage in this release. That is not an issue.
> > > >
> > > >
> > > > >
> > > > > Another option obviously - introduce completely new structure for it
> > > > > and leave existing one unaffected.
> > > > >
> > > > This will create some duplicate code. Would not prefer that.
> > > >
> > > > > >
> > > > > > 5. Remove enum rte_crypto_dp_service, let the PMDs using the
> > session
> > > > private
> > > > > data to decide function handler.
> > > > > > 6. Remove is_update parameter.
> > > > > >
> > > > > > The main point that is uncertain is the existance of "submit_single".
> > > > > > I am ok to remove "submit_single" function. In VPP we can use
> > > > > rte_cryptodev_dp_sym_submit_vec() with vec.num=1 each time to
> > avoid
> > > > > > double looping.
> > > > > > But we have to put the rte_cryptodev_dp_sym_submit_vec() as an
> > inline
> > > > > function - this will cause the API not traced in version map.
> > > > > >
> > > > > > Any ideas?
> > > > > >
> > > > > > Regards,
> > > > > > Fan
> > > > > >
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide
2020-09-18 20:39 ` Akhil Goyal
2020-09-21 12:28 ` Zhang, Roy Fan
@ 2020-09-23 13:37 ` Zhang, Roy Fan
1 sibling, 0 replies; 84+ messages in thread
From: Zhang, Roy Fan @ 2020-09-23 13:37 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, Anoob Joseph
Hi Akhil
> -----Original Message-----
...
> > +
> > +Either enqueue functions will not command the crypto device to start
> > processing
> > +until ``rte_cryptodev_dp_submit_done`` function is called. Before then
> the user
> > +shall expect the driver only stores the necessory context data in the
> > +``rte_crypto_dp_service_ctx`` buffer for the next enqueue operation. If
> the
> > user
> > +wants to abandon the submitted operations, simply call
> > +``rte_cryptodev_dp_configure_service`` function instead with the
> parameter
> > +``is_update`` set to 0. The driver will recover the service context data to
> > +the previous state.
>
> Can you explain a use case where this is actually being used? This looks fancy
> but
> Do we have this type of requirement in any protocol stacks/specifications?
> I believe it to be an extra burden on the application writer if it is not a
> protocol requirement.
>
I missed responding to this one.
The requirement comes from cooperating with the VPP crypto framework.
The reason for this feature is to fill a gap in the cryptodev enqueue and dequeue operations.
If the user application/library uses an approach similar to "rte_crypto_sym_vec" (such as VPP's vnet_crypto_async_frame_t) that clusters multiple crypto ops into a burst, the application requires enqueuing and dequeuing all ops in the burst as a whole, or nothing.
It is very slow for rte_cryptodev_enqueue/dequeue_burst to achieve this today, as the user has no control over precisely how many ops are enqueued/dequeued. For example, suppose I want to enqueue a "rte_crypto_sym_vec" buffer containing 32 descriptors, storing the "rte_crypto_sym_vec" pointer as opaque data during enqueue, but rte_cryptodev_enqueue_burst returns 31. I have no option but to cache the one leftover op for the next enqueue attempt (or to manually check the inflight count on every enqueue). Likewise during dequeue, since the number "32" is stored inside rte_crypto_sym_vec.num, I have no way to know how many ops to dequeue; I can only blindly dequeue ops into a software ring, parse the dequeue count from the retrieved opaque data, and check the ring count against that dequeue count.
With the new approach we can easily achieve this goal. For a HW crypto PMD such an implementation is relatively easy: we only need to create a shadow copy of the queue pair data in ``rte_crypto_dp_service_ctx`` and update it on enqueue/dequeue. When "enqueue/dequeue_done" is called, the queue is kicked to start processing the jobs already set in the queue, and the shadow copy is merged into the driver-maintained queue data.
Regards,
Fan
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 0/4] cryptodev: add data-path service APIs Fan Zhang
` (3 preceding siblings ...)
2020-09-08 8:42 ` [dpdk-dev] [dpdk-dev v9 4/4] doc: add cryptodev service APIs guide Fan Zhang
@ 2020-09-24 16:34 ` Fan Zhang
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
` (5 more replies)
4 siblings, 6 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-24 16:34 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
anoobj, konstantin.ananyev, Fan Zhang
The Crypto raw data-path APIs are a set of APIs designed to enable
external libraries/applications which want to leverage the cryptographic
processing provided by DPDK crypto PMDs through the cryptodev API, but in a
manner that is not dependent on native DPDK data structures (e.g. rte_mbuf,
rte_crypto_op, etc.) in their data-path implementation.
The raw data-path APIs have the following advantages:
- External data structure friendly design. The new APIs use the operation
descriptor ``struct rte_crypto_sym_vec``, which supports raw data pointers and
IOVA addresses as input. Moreover, the APIs do not require the user to
allocate the descriptor from a mempool, nor do they require mbufs to describe
the input data's virtual and IOVA addresses. All these features make the
translation from the user's own data structure into the descriptor easier and
more efficient.
- Flexible enqueue and dequeue operation. The raw data-path APIs give the
user more control over the enqueue and dequeue operations, including the
capability of precise enqueue/dequeue counts, the ability to abandon an
enqueue or dequeue at any time, and operation status translation and setting
on the fly.
v10:
- Changed rte_crypto_sym_vec structure to support both sync cpu_crypto and
async raw data-path API.
- Changed documentation.
- Changed API names.
- Changed the way data-path context is initialized.
- Added new API to attach session or xform to existing context.
- Changed QAT PMD accordingly with new APIs.
- Changed unit test to use the device feature flag for the raw API tests.
v9:
- Changed return types of submit_done() and dequeue_done() APIs.
- Added release note update.
v8:
- Updated following by comments.
- Fixed a few bugs.
- Fixed ARM build error.
- Updated the unit test covering all tests.
v7:
- Fixed a few typos.
- Fixed length calculation bugs.
v6:
- Rebased on top of DPDK 20.08.
- Changed to service ctx and added single job submit/dequeue.
v5:
- Changed to use rte_crypto_sym_vec as input.
- Changed to use public APIs instead of use function pointer.
v4:
- Added missed patch.
v3:
- Instead of QAT only API, moved the API to cryptodev.
- Added cryptodev feature flags.
v2:
- Used a structure to simplify parameters.
- Added unit tests.
- Added documentation.
Fan Zhang (4):
cryptodev: change crypto symmetric vector structure
cryptodev: add raw crypto data-path APIs
crypto/qat: add raw crypto data-path API support
test/crypto: add unit-test for cryptodev raw API test
app/test/test_cryptodev.c | 784 ++++++++++++++-
app/test/test_cryptodev.h | 12 +
app/test/test_cryptodev_blockcipher.c | 58 +-
doc/guides/prog_guide/cryptodev_lib.rst | 96 +-
doc/guides/rel_notes/release_20_11.rst | 10 +
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 18 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 9 +-
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 11 +
drivers/crypto/qat/qat_sym_hw_dp.c | 951 ++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
lib/librte_cryptodev/rte_crypto_sym.h | 40 +-
lib/librte_cryptodev/rte_cryptodev.c | 104 ++
lib/librte_cryptodev/rte_cryptodev.h | 354 ++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 +-
.../rte_cryptodev_version.map | 11 +
lib/librte_ipsec/esp_inb.c | 12 +-
lib/librte_ipsec/esp_outb.c | 12 +-
lib/librte_ipsec/misc.h | 6 +-
19 files changed, 2437 insertions(+), 108 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
@ 2020-09-24 16:34 ` Fan Zhang
2020-09-25 8:03 ` Dybkowski, AdamX
2020-09-28 17:01 ` Ananyev, Konstantin
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs Fan Zhang
` (4 subsequent siblings)
5 siblings, 2 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-24 16:34 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
anoobj, konstantin.ananyev, Fan Zhang
This patch updates the ``rte_crypto_sym_vec`` structure to add
support for both cpu_crypto synchronous operation and
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev.c | 25 ++++++++------
doc/guides/prog_guide/cryptodev_lib.rst | 3 +-
doc/guides/rel_notes/release_20_11.rst | 3 ++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 18 +++++-----
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 9 +++--
lib/librte_cryptodev/rte_crypto_sym.h | 40 ++++++++++++++++------
lib/librte_ipsec/esp_inb.c | 12 +++----
lib/librte_ipsec/esp_outb.c | 12 +++----
lib/librte_ipsec/misc.h | 6 ++--
9 files changed, 79 insertions(+), 49 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..99f1eed82 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -151,11 +151,11 @@ static void
process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
{
int32_t n, st;
- void *iv;
struct rte_crypto_sym_op *sop;
union rte_crypto_sym_ofs ofs;
struct rte_crypto_sgl sgl;
struct rte_crypto_sym_vec symvec;
+ struct rte_crypto_va_iova_ptr iv_ptr, aad_ptr, digest_ptr;
struct rte_crypto_vec vec[UINT8_MAX];
sop = op->sym;
@@ -171,13 +171,17 @@ process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
sgl.vec = vec;
sgl.num = n;
symvec.sgl = &sgl;
- iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
- symvec.iv = &iv;
- symvec.aad = (void **)&sop->aead.aad.data;
- symvec.digest = (void **)&sop->aead.digest.data;
+ symvec.iv = &iv_ptr;
+ symvec.digest = &digest_ptr;
+ symvec.aad = &aad_ptr;
symvec.status = &st;
symvec.num = 1;
+ /* for CPU crypto the IOVA address is not required */
+ iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ digest_ptr.va = (void *)sop->aead.digest.data;
+ aad_ptr.va = (void *)sop->aead.aad.data;
+
ofs.raw = 0;
n = rte_cryptodev_sym_cpu_crypto_process(dev_id, sop->session, ofs,
@@ -193,11 +197,11 @@ static void
process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
{
int32_t n, st;
- void *iv;
struct rte_crypto_sym_op *sop;
union rte_crypto_sym_ofs ofs;
struct rte_crypto_sgl sgl;
struct rte_crypto_sym_vec symvec;
+ struct rte_crypto_va_iova_ptr iv_ptr, digest_ptr;
struct rte_crypto_vec vec[UINT8_MAX];
sop = op->sym;
@@ -213,13 +217,14 @@ process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
sgl.vec = vec;
sgl.num = n;
symvec.sgl = &sgl;
- iv = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
- symvec.iv = &iv;
- symvec.aad = (void **)&sop->aead.aad.data;
- symvec.digest = (void **)&sop->auth.digest.data;
+ symvec.iv = &iv_ptr;
+ symvec.digest = &digest_ptr;
symvec.status = &st;
symvec.num = 1;
+ iv_ptr.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ digest_ptr.va = (void *)sop->auth.digest.data;
+
ofs.raw = 0;
ofs.ofs.cipher.head = sop->cipher.data.offset - sop->auth.data.offset;
ofs.ofs.cipher.tail = (sop->auth.data.offset + sop->auth.data.length) -
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c14f750fa..e7ba35c2d 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -620,7 +620,8 @@ operation descriptor (``struct rte_crypto_sym_vec``) containing:
descriptors of performed operations (``struct rte_crypto_sgl``). Each instance
of ``struct rte_crypto_sgl`` consists of a number of segments and a pointer to
an array of segment descriptors ``struct rte_crypto_vec``;
-- pointers to arrays of size ``num`` containing IV, AAD and digest information,
+- pointers to arrays of size ``num`` containing IV, AAD and digest information
+ in the ``cpu_crypto`` sub-structure,
- pointer to an array of size ``num`` where status information will be stored
for each operation.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0..20ebaef5b 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -135,6 +135,9 @@ API Changes
* bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
+* The structure ``rte_crypto_sym_vec`` is updated to support both cpu_crypto
+ synchronous operation and asynchronous raw data-path APIs.
+
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1d2a0ce00..973b61bd6 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -464,9 +464,10 @@ aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
processed = 0;
for (i = 0; i < vec->num; ++i) {
aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
- &vec->sgl[i], vec->iv[i], vec->aad[i]);
+ &vec->sgl[i], vec->iv[i].va,
+ vec->aad[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
- gdata_ctx, vec->digest[i]);
+ gdata_ctx, vec->digest[i].va);
processed += (vec->status[i] == 0);
}
@@ -482,9 +483,10 @@ aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
processed = 0;
for (i = 0; i < vec->num; ++i) {
aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
- &vec->sgl[i], vec->iv[i], vec->aad[i]);
+ &vec->sgl[i], vec->iv[i].va,
+ vec->aad[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
- gdata_ctx, vec->digest[i]);
+ gdata_ctx, vec->digest[i].va);
processed += (vec->status[i] == 0);
}
@@ -505,9 +507,9 @@ aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
}
aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
- &vec->sgl[i], vec->iv[i]);
+ &vec->sgl[i], vec->iv[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
- gdata_ctx, vec->digest[i]);
+ gdata_ctx, vec->digest[i].va);
processed += (vec->status[i] == 0);
}
@@ -528,9 +530,9 @@ aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
}
aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
- &vec->sgl[i], vec->iv[i]);
+ &vec->sgl[i], vec->iv[i].va);
vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
- gdata_ctx, vec->digest[i]);
+ gdata_ctx, vec->digest[i].va);
processed += (vec->status[i] == 0);
}
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 1bddbcf74..01b3bfc29 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -1744,7 +1744,7 @@ generate_sync_dgst(struct rte_crypto_sym_vec *vec,
for (i = 0, k = 0; i != vec->num; i++) {
if (vec->status[i] == 0) {
- memcpy(vec->digest[i], dgst[i], len);
+ memcpy(vec->digest[i].va, dgst[i], len);
k++;
}
}
@@ -1760,7 +1760,7 @@ verify_sync_dgst(struct rte_crypto_sym_vec *vec,
for (i = 0, k = 0; i != vec->num; i++) {
if (vec->status[i] == 0) {
- if (memcmp(vec->digest[i], dgst[i], len) != 0)
+ if (memcmp(vec->digest[i].va, dgst[i], len) != 0)
vec->status[i] = EBADMSG;
else
k++;
@@ -1823,9 +1823,8 @@ aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev *dev,
}
/* Submit job for processing */
- set_cpu_mb_job_params(job, s, sofs, buf, len,
- vec->iv[i], vec->aad[i], tmp_dgst[i],
- &vec->status[i]);
+ set_cpu_mb_job_params(job, s, sofs, buf, len, vec->iv[i].va,
+ vec->aad[i].va, tmp_dgst[i], &vec->status[i]);
job = submit_sync_job(mb_mgr);
j++;
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..8201189e0 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -51,26 +51,44 @@ struct rte_crypto_sgl {
};
/**
- * Synchronous operation descriptor.
- * Supposed to be used with CPU crypto API call.
+ * Crypto virtual and IOVA address descriptor, used to describe cryptographic
+ * data buffer without the length information. The length information is
+ * normally predefined during session creation.
+ */
+struct rte_crypto_va_iova_ptr {
+ void *va;
+ rte_iova_t *iova;
+};
+
+/**
+ * Raw data operation descriptor.
+ * Supposed to be used with synchronous CPU crypto API call or asynchronous
+ * RAW data path API call.
*/
struct rte_crypto_sym_vec {
+ /** number of operations to perform */
+ uint32_t num;
/** array of SGL vectors */
struct rte_crypto_sgl *sgl;
- /** array of pointers to IV */
- void **iv;
- /** array of pointers to AAD */
- void **aad;
+ /** array of pointers to cipher IV */
+ struct rte_crypto_va_iova_ptr *iv;
/** array of pointers to digest */
- void **digest;
+ struct rte_crypto_va_iova_ptr *digest;
+
+ __extension__
+ union {
+ /** array of pointers to auth IV, used for chain operation */
+ struct rte_crypto_va_iova_ptr *auth_iv;
+ /** array of pointers to AAD, used for AEAD operation */
+ struct rte_crypto_va_iova_ptr *aad;
+ };
+
/**
* array of statuses for each operation:
- * - 0 on success
- * - errno on error
+ * - 0 on success
+ * - errno on error
*/
int32_t *status;
- /** number of operations to perform */
- uint32_t num;
};
/**
diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c
index 96eec0131..2b1df6a03 100644
--- a/lib/librte_ipsec/esp_inb.c
+++ b/lib/librte_ipsec/esp_inb.c
@@ -693,9 +693,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
struct rte_ipsec_sa *sa;
struct replay_sqn *rsn;
union sym_op_data icv;
- void *iv[num];
- void *aad[num];
- void *dgst[num];
+ struct rte_crypto_va_iova_ptr iv[num];
+ struct rte_crypto_va_iova_ptr aad[num];
+ struct rte_crypto_va_iova_ptr dgst[num];
uint32_t dr[num];
uint32_t l4ofs[num];
uint32_t clen[num];
@@ -720,9 +720,9 @@ cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss,
l4ofs + k, rc, ivbuf[k]);
/* fill iv, digest and aad */
- iv[k] = ivbuf[k];
- aad[k] = icv.va + sa->icv_len;
- dgst[k++] = icv.va;
+ iv[k].va = ivbuf[k];
+ aad[k].va = icv.va + sa->icv_len;
+ dgst[k++].va = icv.va;
} else {
dr[i - k] = i;
rte_errno = -rc;
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index fb9d5864c..1e181cf2c 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -449,9 +449,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
uint32_t i, k, n;
uint32_t l2, l3;
union sym_op_data icv;
- void *iv[num];
- void *aad[num];
- void *dgst[num];
+ struct rte_crypto_va_iova_ptr iv[num];
+ struct rte_crypto_va_iova_ptr aad[num];
+ struct rte_crypto_va_iova_ptr dgst[num];
uint32_t dr[num];
uint32_t l4ofs[num];
uint32_t clen[num];
@@ -488,9 +488,9 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
ivbuf[k]);
/* fill iv, digest and aad */
- iv[k] = ivbuf[k];
- aad[k] = icv.va + sa->icv_len;
- dgst[k++] = icv.va;
+ iv[k].va = ivbuf[k];
+ aad[k].va = icv.va + sa->icv_len;
+ dgst[k++].va = icv.va;
} else {
dr[i - k] = i;
rte_errno = -rc;
diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h
index 1b543ed87..79b9a2076 100644
--- a/lib/librte_ipsec/misc.h
+++ b/lib/librte_ipsec/misc.h
@@ -112,7 +112,9 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs,
static inline void
cpu_crypto_bulk(const struct rte_ipsec_session *ss,
union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[],
- void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[],
+ struct rte_crypto_va_iova_ptr iv[],
+ struct rte_crypto_va_iova_ptr aad[],
+ struct rte_crypto_va_iova_ptr dgst[], uint32_t l4ofs[],
uint32_t clen[], uint32_t num)
{
uint32_t i, j, n;
@@ -136,8 +138,8 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
/* fill the request structure */
symvec.sgl = &vecpkt[j];
symvec.iv = &iv[j];
- symvec.aad = &aad[j];
symvec.digest = &dgst[j];
+ symvec.aad = &aad[j];
symvec.status = &st[j];
symvec.num = i - j;
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
@ 2020-09-24 16:34 ` Fan Zhang
2020-09-25 8:04 ` Dybkowski, AdamX
` (2 more replies)
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 3/4] crypto/qat: add raw crypto data-path API support Fan Zhang
` (3 subsequent siblings)
5 siblings, 3 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-24 16:34 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
anoobj, konstantin.ananyev, Fan Zhang, Piotr Bronowski
This patch adds raw data-path APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 93 +++++
doc/guides/rel_notes/release_20_11.rst | 7 +
lib/librte_cryptodev/rte_crypto_sym.h | 2 +-
lib/librte_cryptodev/rte_cryptodev.c | 104 +++++
lib/librte_cryptodev/rte_cryptodev.h | 354 +++++++++++++++++-
lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++-
.../rte_cryptodev_version.map | 11 +
7 files changed, 614 insertions(+), 4 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index e7ba35c2d..5fe6c3c24 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -632,6 +632,99 @@ a call argument. Status different than zero must be treated as error.
For more details, e.g. how to convert an mbuf to an SGL, please refer to an
example usage in the IPsec library implementation.
+Cryptodev Raw Data-path APIs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Crypto Raw data-path APIs are a set of APIs designed to enable
+external libraries/applications to leverage the cryptographic
+processing provided by DPDK crypto PMDs through the cryptodev API, but in a
+manner that is not dependent on native DPDK data structures (e.g. rte_mbuf,
+rte_crypto_op, etc.) in their data-path implementation.
+
+The raw data-path APIs have the following advantages:
+
+- External data structure friendly design. The new APIs use the operation
+  descriptor ``struct rte_crypto_sym_vec`` that supports raw data pointers and
+  IOVA addresses as input. Moreover, the APIs do not require the user to
+  allocate the descriptor from a mempool, nor require mbufs to describe the
+  input data's virtual and IOVA addresses. All these features make the
+  translation from the user's own data structure into the descriptor easier
+  and more efficient.
+- Flexible enqueue and dequeue operation. The raw data-path APIs give the
+  user more control over the enqueue and dequeue operations, including the
+  capability of precise enqueue/dequeue counts, abandoning an enqueue or
+  dequeue at any time, and translating and setting the operation status on
+  the fly.
+
+Cryptodev PMDs which support the raw data-path APIs will have the
+``RTE_CRYPTODEV_FF_SYM_HW_RAW_DP`` feature flag set. To use this
+feature, the user should create a local ``struct rte_crypto_raw_dp_ctx``
+buffer extended to at least the length returned by the
+``rte_cryptodev_raw_get_dp_context_size`` function call. The created buffer
+is then configured using the ``rte_cryptodev_raw_configure_dp_context``
+function. The library and the crypto device driver will then configure the
+buffer and write the necessary temporary data into it for later enqueue and
+dequeue operations. The temporary data may be treated as a shadow copy of the
+driver's private queue pair data.
+
+After the ``struct rte_crypto_raw_dp_ctx`` buffer is initialized, it is then
+attached to either the cryptodev sym session, the rte_security session, or the
+cryptodev xform for session-less operation by the
+``rte_cryptodev_raw_attach_session`` function. With the session or xform
+information the driver will set the corresponding enqueue and dequeue function
+handlers in the ``struct rte_crypto_raw_dp_ctx`` buffer.
+
+After the session is attached, the ``struct rte_crypto_raw_dp_ctx`` buffer is
+ready for enqueue and dequeue operations. There are two different enqueue
+functions: ``rte_cryptodev_raw_enqueue`` to enqueue a single descriptor,
+and ``rte_cryptodev_raw_enqueue_burst`` to enqueue multiple descriptors.
+If the application uses an approach similar to
+``struct rte_crypto_sym_vec`` to manage its data burst but with a different
+data structure, using the ``rte_cryptodev_raw_enqueue_burst`` function may be
+less efficient: the application has to loop over
+all crypto descriptors to assemble the ``struct rte_crypto_sym_vec`` buffer
+from its own data structure, and then the driver will loop over them again to
+translate every crypto job into the driver's specific queue data. In this
+situation ``rte_cryptodev_raw_enqueue`` should be used instead to save one
+loop for each data burst.
+
+During the enqueue, the cryptodev driver only writes the enqueued descriptors
+into the device queue but does not initiate the device to start processing
+them. The temporary queue pair data changes related to the enqueued
+descriptors may be recorded in the ``struct rte_crypto_raw_dp_ctx`` buffer as
+the reference for the next enqueue function call. When
+``rte_cryptodev_raw_enqueue_done`` is called, the driver will initiate the
+processing of all enqueued descriptors and merge the temporary queue pair
+data changes into the driver's private queue pair data. Calling
+``rte_cryptodev_raw_configure_dp_context`` twice without a
+``rte_cryptodev_raw_enqueue_done`` call in between will invalidate the
+temporary data stored in the ``struct rte_crypto_raw_dp_ctx`` buffer. This
+feature is useful when the user wants to abandon partially enqueued data of a
+failed enqueue burst operation and retry enqueuing the whole burst later.
+
+Similar to enqueue, there are two dequeue functions:
+``rte_cryptodev_raw_dequeue`` for dequeuing a single descriptor, and
+``rte_cryptodev_raw_dequeue_burst`` for dequeuing a burst of descriptors. The
+dequeue functions only write back the user data that was passed to the driver
+during enqueue, and inform the application of the operation status.
+Unlike ``rte_cryptodev_dequeue_burst``, where the user can only
+set an expected dequeue count and needs to read the dequeued cryptodev
+operations' status field, the raw data-path dequeue burst function allows
+the user to provide callback functions to retrieve the dequeue
+count from the enqueued user data, and to write the expected status value to
+the user data on the fly.
+
+As with enqueue, both ``rte_cryptodev_raw_dequeue`` and
+``rte_cryptodev_raw_dequeue_burst`` will not wipe the dequeued descriptors
+from the cryptodev queue unless ``rte_cryptodev_raw_dequeue_done`` is called.
+The dequeue-related temporary queue data will be merged into the driver's
+private queue data in that function call.
+
+There are a few limitations to the raw data-path APIs:
+
+* Only in-place operations are supported.
+* The APIs are NOT thread-safe.
+* The raw APIs' enqueue CANNOT be mixed with rte_cryptodev_enqueue_burst on
+  the same queue pair, or vice versa.
+
+See *DPDK API Reference* for details on each API definition.
+
Sample code
-----------
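As a summary of the configure/attach/enqueue/dequeue sequence described above, here is a minimal usage sketch. This is illustrative only and not part of the patch: it assumes ``dev_id``, ``qp_id``, ``sess``, ``vec``, ``ofs``, the user-data arrays and the two dequeue callbacks already exist, and it omits all error handling.

```c
/* Illustrative sketch of the raw data-path API flow (not from the patch). */
int size = rte_cryptodev_raw_get_dp_context_size(dev_id);
struct rte_crypto_raw_dp_ctx *ctx = malloc(size);

/* configure the context for the chosen queue pair */
rte_cryptodev_raw_configure_dp_context(dev_id, qp_id, ctx);

/* attach a symmetric session so the driver can pick handlers */
union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sess };
rte_cryptodev_raw_attach_session(dev_id, qp_id, ctx,
		RTE_CRYPTO_OP_WITH_SESSION, sess_ctx);

/* enqueue a burst, then kick the device to start processing */
uint32_t n = rte_cryptodev_raw_enqueue_burst(ctx, &vec, ofs, user_data);
rte_cryptodev_raw_enqueue_done(ctx, n);

/* later: dequeue, then release the consumed queue entries */
uint32_t n_success;
uint32_t n_deq = rte_cryptodev_raw_dequeue_burst(ctx, get_count_cb,
		post_deq_cb, out_user_data, 1, &n_success);
rte_cryptodev_raw_dequeue_done(ctx, n_deq);
```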
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 20ebaef5b..d3d9f82f7 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+ * **Added raw data-path APIs for cryptodev library.**
+
+ Cryptodev now supports raw data-path APIs to accelerate external
+ libraries or applications that want fast cryptodev enqueue/dequeue
+ operations but do not necessarily depend on mbufs and the cryptodev
+ operation mempool.
+
Removed Items
-------------
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 8201189e0..e1f23d303 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -57,7 +57,7 @@ struct rte_crypto_sgl {
*/
struct rte_crypto_va_iova_ptr {
void *va;
- rte_iova_t *iova;
+ rte_iova_t iova;
};
/**
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1dd795bcb..daeb5f504 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1914,6 +1914,110 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
}
+int
+rte_cryptodev_raw_get_dp_context_size(uint8_t dev_id)
+{
+ struct rte_cryptodev *dev;
+ int32_t size = sizeof(struct rte_crypto_raw_dp_ctx);
+ int32_t priv_size;
+
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
+ return -EINVAL;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (*dev->dev_ops->get_drv_ctx_size == NULL ||
+ !(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)) {
+ return -ENOTSUP;
+ }
+
+ priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
+ if (priv_size < 0)
+ return -ENOTSUP;
+
+ return RTE_ALIGN_CEIL((size + priv_size), 8);
+}
+
+int
+rte_cryptodev_raw_configure_dp_context(uint8_t dev_id, uint16_t qp_id,
+ struct rte_crypto_raw_dp_ctx *ctx)
+{
+ struct rte_cryptodev *dev;
+ union rte_cryptodev_session_ctx sess_ctx = {NULL};
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -EINVAL;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)
+ || dev->dev_ops->configure_dp_ctx == NULL)
+ return -ENOTSUP;
+
+ return (*dev->dev_ops->configure_dp_ctx)(dev, qp_id,
+ RTE_CRYPTO_OP_WITH_SESSION, sess_ctx, ctx);
+}
+
+int
+rte_cryptodev_raw_attach_session(uint8_t dev_id, uint16_t qp_id,
+ struct rte_crypto_raw_dp_ctx *ctx,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx)
+{
+ struct rte_cryptodev *dev;
+
+ if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
+ return -EINVAL;
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)
+ || dev->dev_ops->configure_dp_ctx == NULL)
+ return -ENOTSUP;
+ return (*dev->dev_ops->configure_dp_ctx)(dev, qp_id, sess_type,
+ session_ctx, ctx);
+}
+
+uint32_t
+rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **user_data)
+{
+ if (vec->num == 1) {
+ vec->status[0] = rte_cryptodev_raw_enqueue(ctx, vec->sgl->vec,
+ vec->sgl->num, ofs, vec->iv, vec->digest, vec->aad,
+ user_data[0]);
+ return (vec->status[0] == 0) ? 1 : 0;
+ }
+
+ return (*ctx->enqueue_burst)(ctx->qp_data, ctx->drv_ctx_data, vec,
+ ofs, user_data);
+}
+
+int
+rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
+ uint32_t n)
+{
+ return (*ctx->enqueue_done)(ctx->qp_data, ctx->drv_ctx_data, n);
+}
+
+int
+rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
+ uint32_t n)
+{
+ return (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_ctx_data, n);
+}
+
+uint32_t
+rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
+ rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_raw_post_dequeue_t post_dequeue,
+ void **out_user_data, uint8_t is_user_data_array,
+ uint32_t *n_success_jobs)
+{
+ return (*ctx->dequeue_burst)(ctx->qp_data, ctx->drv_ctx_data,
+ get_dequeue_count, post_dequeue, out_user_data,
+ is_user_data_array, n_success_jobs);
+}
+
/** Initialise rte_crypto_op mempool element */
static void
rte_crypto_op_init(struct rte_mempool *mempool,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7b3ebc20f..3579ab66e 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
/**< Support symmetric session-less operations */
#define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
/**< Support operations on data which is not byte aligned */
-
+#define RTE_CRYPTODEV_FF_SYM_HW_RAW_DP (1ULL << 24)
+/**< Support accelerated specific raw data-path APIs */
/**
* Get the name of a crypto device feature flag
@@ -1351,6 +1352,357 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
struct rte_crypto_sym_vec *vec);
+/**
+ * Get the size of the raw data-path context buffer.
+ *
+ * @param dev_id The device identifier.
+ *
+ * @return
+ * - If the device supports raw data-path APIs, return the context size.
+ * - If the device does not support the APIs, return a negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_raw_get_dp_context_size(uint8_t dev_id);
+
+/**
+ * Union of different crypto session types, including session-less xform
+ * pointer.
+ */
+union rte_cryptodev_session_ctx {
+ struct rte_cryptodev_sym_session *crypto_sess;
+ struct rte_crypto_sym_xform *xform;
+ struct rte_security_session *sec_sess;
+};
+
+/**
+ * Enqueue a data vector into the device queue, but the driver will not start
+ * processing until rte_cryptodev_raw_enqueue_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param drv_ctx Driver specific context data.
+ * @param vec The array of descriptor vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param user_data The array of user data for dequeue later.
+ * @return
+ * - The number of descriptors successfully submitted.
+ */
+typedef uint32_t (*cryptodev_dp_sym_enqueue_burst_t)(
+ void *qp, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec,
+ union rte_crypto_sym_ofs ofs, void *user_data[]);
+
+/**
+ * Enqueue a single descriptor into the device queue, but the driver will not
+ * start processing until rte_cryptodev_raw_enqueue_done() is called.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param drv_ctx Driver specific context data.
+ * @param data_vec The buffer data vector.
+ * @param n_data_vecs Number of buffer data vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV virtual and IOVA addresses
+ * @param digest digest virtual and IOVA addresses
+ * @param aad_or_auth_iv AAD or auth IV virtual and IOVA addresses,
+ * depends on the algorithm used.
+ * @param user_data The user data.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_sym_enqueue_t)(
+ void *qp, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+ uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+ void *user_data);
+
+/**
+ * Inform the cryptodev queue pair to start processing or finish dequeuing all
+ * enqueued/dequeued descriptors.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param drv_ctx Driver specific context data.
+ * @param n The total number of processed descriptors.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_sym_operation_done_t)(void *qp, uint8_t *drv_ctx,
+ uint32_t n);
+
+/**
+ * Typedef of the callback function the user provides for the driver to get
+ * the dequeue count. The function may return a fixed number or the number
+ * parsed from the user data stored in the first processed descriptor.
+ *
+ * @param user_data Dequeued user data.
+ **/
+typedef uint32_t (*rte_cryptodev_raw_get_dequeue_count_t)(void *user_data);
+
+/**
+ * Typedef of the callback function the user provides to handle post-dequeue
+ * operations, such as filling status.
+ *
+ * @param user_data Dequeued user data.
+ * @param index Index number of the processed descriptor.
+ * @param is_op_success Operation status provided by the driver.
+ **/
+typedef void (*rte_cryptodev_raw_post_dequeue_t)(void *user_data,
+ uint32_t index, uint8_t is_op_success);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param drv_ctx Driver specific context data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_user_data User data pointer array to be retrieved
+ * from the device queue. If
+ * *is_user_data_array* is set there
+ * should be enough room to store all
+ * user data.
+ * @param is_user_data_array Set to 1 if every dequeued user data is
+ * to be written into the *out_user_data*
+ * array.
+ * @param n_success Driver-written value to specify the
+ * total count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+typedef uint32_t (*cryptodev_dp_sym_dequeue_burst_t)(void *qp,
+ uint8_t *drv_ctx,
+ rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_raw_post_dequeue_t post_dequeue,
+ void **out_user_data, uint8_t is_user_data_array,
+ uint32_t *n_success);
+
+/**
+ * Dequeue symmetric crypto processing of user provided data.
+ *
+ * @param qp Driver specific queue pair data.
+ * @param drv_ctx Driver specific context data.
+ * @param out_user_data User data pointer to be retrieved from
+ * the device queue.
+ *
+ * @return
+ * - 1 if the user_data is dequeued and the operation is a success.
+ * - 0 if the user_data is dequeued but the operation is failed.
+ * - -1 if no operation is dequeued.
+ */
+typedef int (*cryptodev_dp_sym_dequeue_t)(
+ void *qp, uint8_t *drv_ctx, void **out_user_data);
+
+/**
+ * Context data for raw data-path API crypto process. The buffer of this
+ * structure is to be allocated by the user application with a size equal to
+ * or bigger than the value returned by
+ * rte_cryptodev_raw_get_dp_context_size().
+ *
+ * NOTE: the buffer is to be used and maintained by the cryptodev driver; the
+ * user should NOT alter the buffer content to avoid application or system
+ * crash.
+ */
+struct rte_crypto_raw_dp_ctx {
+ void *qp_data;
+
+ cryptodev_dp_sym_enqueue_t enqueue;
+ cryptodev_dp_sym_enqueue_burst_t enqueue_burst;
+ cryptodev_dp_sym_operation_done_t enqueue_done;
+ cryptodev_dp_sym_dequeue_t dequeue;
+ cryptodev_dp_sym_dequeue_burst_t dequeue_burst;
+ cryptodev_dp_sym_operation_done_t dequeue_done;
+
+ /* Driver specific context data */
+ __extension__ uint8_t drv_ctx_data[];
+};
+
+/**
+ * Configure raw data-path context data.
+ *
+ * NOTE:
+ * After the context data is configured, the user should call
+ * rte_cryptodev_raw_attach_session() before using it in
+ * rte_cryptodev_raw_enqueue/dequeue function call.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param ctx The raw data-path context data.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_raw_configure_dp_context(uint8_t dev_id, uint16_t qp_id,
+ struct rte_crypto_raw_dp_ctx *ctx);
+
+/**
+ * Attach a cryptodev session to an initialized raw data path context.
+ *
+ * @param dev_id The device identifier.
+ * @param qp_id The index of the queue pair from which to
+ * retrieve processed packets. The value must be
+ * in the range [0, nb_queue_pair - 1] previously
+ * supplied to rte_cryptodev_configure().
+ * @param ctx The raw data-path context data.
+ * @param sess_type session type.
+ * @param session_ctx Session context data.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+int
+rte_cryptodev_raw_attach_session(uint8_t dev_id, uint16_t qp_id,
+ struct rte_crypto_raw_dp_ctx *ctx,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx);
+
+/**
+ * Enqueue a single raw data-path descriptor.
+ *
+ * The enqueued descriptor will not be started processing until
+ * rte_cryptodev_raw_enqueue_done() is called.
+ *
+ * @param ctx The initialized raw data-path context data.
+ * @param data_vec The buffer vector.
+ * @param n_data_vecs Number of buffer vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param iv IV virtual and IOVA addresses
+ * @param digest digest virtual and IOVA addresses
+ * @param aad_or_auth_iv AAD or auth IV virtual and IOVA addresses,
+ * depends on the algorithm used.
+ * @param user_data The user data.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_raw_enqueue(struct rte_crypto_raw_dp_ctx *ctx,
+ struct rte_crypto_vec *data_vec, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+ void *user_data)
+{
+ return (*ctx->enqueue)(ctx->qp_data, ctx->drv_ctx_data, data_vec,
+ n_data_vecs, ofs, iv, digest, aad_or_auth_iv, user_data);
+}
+
+/**
+ * Enqueue a data vector of raw data-path descriptors.
+ *
+ * The enqueued descriptors will not be started processing until
+ * rte_cryptodev_raw_enqueue_done() is called.
+ *
+ * @param ctx The initialized raw data-path context data.
+ * @param vec The array of descriptor vectors.
+ * @param ofs Start and stop offsets for auth and cipher
+ * operations.
+ * @param user_data The array of opaque data for dequeue.
+ * @return
+ * - The number of descriptors successfully enqueued.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void **user_data);
+
+/**
+ * Start processing all descriptors enqueued since the last
+ * rte_cryptodev_raw_configure_dp_context() call.
+ *
+ * @param ctx The initialized raw data-path context data.
+ * @param n The total number of submitted descriptors.
+ */
+__rte_experimental
+int
+rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
+ uint32_t n);
+
+/**
+ * Dequeue a burst of raw crypto data-path operations and write the previously
+ * enqueued user data into the array provided.
+ *
+ * The dequeued operations, including the user data stored, will not be
+ * wiped out from the device queue until rte_cryptodev_raw_dequeue_done() is
+ * called.
+ *
+ * @param ctx The initialized raw data-path context
+ * data.
+ * @param get_dequeue_count User provided callback function to
+ * obtain dequeue count.
+ * @param post_dequeue User provided callback function to
+ * post-process a dequeued operation.
+ * @param out_user_data User data pointer array to be retrieved
+ * from the device queue. If
+ * *is_user_data_array* is set there
+ * should be enough room to store all
+ * user data.
+ * @param is_user_data_array Set to 1 if every dequeued user data is
+ * to be written into the *out_user_data*
+ * array.
+ * @param n_success Driver-written value to specify the
+ * total count of successful operations.
+ *
+ * @return
+ * - Returns number of dequeued packets.
+ */
+__rte_experimental
+uint32_t
+rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
+ rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_raw_post_dequeue_t post_dequeue,
+ void **out_user_data, uint8_t is_user_data_array,
+ uint32_t *n_success);
+
+/**
+ * Dequeue a raw crypto data-path operation and write the previously
+ * enqueued user data.
+ *
+ * The dequeued operation, including the user data stored, will not be wiped
+ * out from the device queue until rte_cryptodev_raw_dequeue_done() is called.
+ *
+ * @param ctx The initialized raw data-path context
+ * data.
+ * @param out_user_data User data pointer to be retrieved from
+ * the device queue. The driver shall
+ * support NULL as input for this parameter.
+ *
+ * @return
+ * - 1 if the user data is dequeued and the operation is a success.
+ * - 0 if the user data is dequeued but the operation is failed.
+ * - -1 if no operation is ready to be dequeued.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_cryptodev_raw_dequeue(struct rte_crypto_raw_dp_ctx *ctx,
+ void **out_user_data)
+{
+ return (*ctx->dequeue)(ctx->qp_data, ctx->drv_ctx_data, out_user_data);
+}
+
+/**
+ * Inform the queue pair that the dequeue operations are finished.
+ *
+ * @param ctx The initialized raw data-path context data.
+ * @param n The total number of jobs already dequeued.
+ */
+__rte_experimental
+int
+rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
+ uint32_t n);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 81975d72b..69a2a6d64 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -316,6 +316,40 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
+/**
+ * Typedef that the driver provides to get the service context private data
+ * size.
+ *
+ * @param dev Crypto device pointer.
+ *
+ * @return
+ * - On success return the size of the device's service context private data.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_get_service_ctx_size_t)(
+ struct rte_cryptodev *dev);
+
+/**
+ * Typedef that the driver provides to configure the data-path context.
+ *
+ * @param dev Crypto device pointer.
+ * @param qp_id Crypto device queue pair index.
+ * @param sess_type session type.
+ * @param session_ctx Session context data. If NULL the driver
+ * shall only configure the drv_ctx_data in
+ * ctx buffer. Otherwise the driver shall only
+ * parse the session_ctx to set appropriate
+ * function pointers in ctx.
+ * @param ctx The raw data-path context data.
+ * @return
+ * - On success return 0.
+ * - On failure return negative integer.
+ */
+typedef int (*cryptodev_dp_configure_ctx_t)(
+ struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_raw_dp_ctx *ctx);
/** Crypto device operations function pointer table */
struct rte_cryptodev_ops {
@@ -348,8 +382,17 @@ struct rte_cryptodev_ops {
/**< Clear a Crypto sessions private data. */
cryptodev_asym_free_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
- /**< process input data synchronously (cpu-crypto). */
+ union {
+ cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
+ /**< process input data synchronously (cpu-crypto). */
+ __extension__
+ struct {
+ cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size;
+ /**< Get data path service context data size. */
+ cryptodev_dp_configure_ctx_t configure_dp_ctx;
+ /**< Initialize crypto service ctx data. */
+ };
+ };
};
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 02f6dcf72..bc4cd1ea5 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -105,4 +105,15 @@ EXPERIMENTAL {
# added in 20.08
rte_cryptodev_get_qp_status;
+
+ # added in 20.11
+ rte_cryptodev_raw_attach_session;
+ rte_cryptodev_raw_configure_dp_context;
+ rte_cryptodev_raw_get_dp_context_size;
+ rte_cryptodev_raw_dequeue;
+ rte_cryptodev_raw_dequeue_burst;
+ rte_cryptodev_raw_dequeue_done;
+ rte_cryptodev_raw_enqueue;
+ rte_cryptodev_raw_enqueue_burst;
+ rte_cryptodev_raw_enqueue_done;
};
--
2.20.1
* [dpdk-dev] [dpdk-dev v10 3/4] crypto/qat: add raw crypto data-path API support
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs Fan Zhang
@ 2020-09-24 16:34 ` Fan Zhang
2020-09-25 8:04 ` Dybkowski, AdamX
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 4/4] test/crypto: add unit-test for cryptodev raw API test Fan Zhang
` (2 subsequent siblings)
5 siblings, 1 reply; 84+ messages in thread
From: Fan Zhang @ 2020-09-24 16:34 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
anoobj, konstantin.ananyev, Fan Zhang
This patch updates QAT PMD to add raw data-path API support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
drivers/crypto/qat/meson.build | 1 +
drivers/crypto/qat/qat_sym.h | 11 +
drivers/crypto/qat/qat_sym_hw_dp.c | 951 +++++++++++++++++++++++++++++
drivers/crypto/qat/qat_sym_pmd.c | 9 +-
4 files changed, 970 insertions(+), 2 deletions(-)
create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
index a225f374a..bc90ec44c 100644
--- a/drivers/crypto/qat/meson.build
+++ b/drivers/crypto/qat/meson.build
@@ -15,6 +15,7 @@ if dep.found()
qat_sources += files('qat_sym_pmd.c',
'qat_sym.c',
'qat_sym_session.c',
+ 'qat_sym_hw_dp.c',
'qat_asym_pmd.c',
'qat_asym.c')
qat_ext_deps += dep
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 1a9748849..b748c7705 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -264,6 +264,16 @@ qat_sym_process_response(void **op, uint8_t *resp)
}
*op = (void *)rx_op;
}
+
+int
+qat_sym_dp_configure_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_raw_dp_ctx *raw_dp_ctx);
+
+int
+qat_sym_get_service_ctx_size(struct rte_cryptodev *dev);
+
#else
static inline void
@@ -276,5 +286,6 @@ static inline void
qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused)
{
}
+
#endif
#endif /* _QAT_SYM_H_ */
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
new file mode 100644
index 000000000..9c65c1c3f
--- /dev/null
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -0,0 +1,951 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_cryptodev_pmd.h>
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym.h"
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_qp.h"
+
+struct qat_sym_dp_ctx {
+ struct qat_sym_session *session;
+ uint32_t tail;
+ uint32_t head;
+ uint16_t cached_enqueue;
+ uint16_t cached_dequeue;
+};
+
+static __rte_always_inline int32_t
+qat_sym_dp_parse_data_vec(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs)
+{
+ struct qat_queue *tx_queue;
+ struct qat_sym_op_cookie *cookie;
+ struct qat_sgl *list;
+ uint32_t i;
+ uint32_t total_len;
+
+ if (likely(n_data_vecs == 1)) {
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ data[0].iova;
+ req->comn_mid.src_length = req->comn_mid.dst_length =
+ data[0].len;
+ return data[0].len;
+ }
+
+ if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER)
+ return -1;
+
+ total_len = 0;
+ tx_queue = &qp->tx_q;
+
+ ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags,
+ QAT_COMN_PTR_TYPE_SGL);
+ cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz];
+ list = (struct qat_sgl *)&cookie->qat_sgl_src;
+
+ for (i = 0; i < n_data_vecs; i++) {
+ list->buffers[i].len = data[i].len;
+ list->buffers[i].resrvd = 0;
+ list->buffers[i].addr = data[i].iova;
+ if (total_len + data[i].len > UINT32_MAX) {
+ QAT_DP_LOG(ERR, "Message too long");
+ return -1;
+ }
+ total_len += data[i].len;
+ }
+
+ list->num_bufs = i;
+ req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr =
+ cookie->qat_sgl_src_phys_addr;
+ req->comn_mid.src_length = req->comn_mid.dst_length = 0;
+ return total_len;
+}
+
+static __rte_always_inline void
+set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
+ struct rte_crypto_va_iova_ptr *iv_ptr, uint32_t iv_len,
+ struct icp_qat_fw_la_bulk_req *qat_req)
+{
+ /* copy IV into request if it fits */
+ if (iv_len <= sizeof(cipher_param->u.cipher_IV_array))
+ rte_memcpy(cipher_param->u.cipher_IV_array, iv_ptr->va,
+ iv_len);
+ else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr = iv_ptr->iova;
+ }
+}
+
+#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \
+ (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \
+ ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status))
+
+static __rte_always_inline void
+qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n)
+{
+ uint32_t i;
+
+ for (i = 0; i < n; i++)
+ sta[i] = status;
+}
+
+#define QAT_SYM_DP_CHECK_ENQ_POSSIBLE(q, c, n) \
+ (q->enqueued - q->dequeued + c + n < q->max_inflights)
+
+static __rte_always_inline void
+submit_one_aead_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_va_iova_ptr *iv,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param =
+ (void *)&req->serv_specif_rqpars;
+ struct icp_qat_fw_la_auth_req_params *auth_param =
+ (void *)((uint8_t *)&req->serv_specif_rqpars +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+ uint8_t *aad_data;
+ uint8_t aad_ccm_real_len;
+ uint8_t aad_len_field_sz;
+ uint32_t msg_len_be;
+ rte_iova_t aad_iova = 0;
+ uint8_t q;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy(cipher_param->u.cipher_IV_array, iv->va,
+ ctx->cipher_iv.length);
+ aad_iova = aad->iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC:
+ aad_data = aad->va;
+ aad_iova = aad->iova;
+ aad_ccm_real_len = 0;
+ aad_len_field_sz = 0;
+ msg_len_be = rte_bswap32((uint32_t)data_len -
+ ofs.ofs.cipher.head);
+
+ if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) {
+ aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ aad_ccm_real_len = ctx->aad_len -
+ ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ICP_QAT_HW_CCM_AAD_LEN_INFO;
+ } else {
+ aad_data = iv->va;
+ aad_iova = iv->iova;
+ }
+
+ q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length;
+ aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS(
+ aad_len_field_sz, ctx->digest_length, q);
+ if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET + (q -
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE),
+ (uint8_t *)&msg_len_be,
+ ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE);
+ } else {
+ memcpy(aad_data + ctx->cipher_iv.length +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)&msg_len_be +
+ (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE
+ - q), q);
+ }
+
+ if (aad_len_field_sz > 0) {
+ *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] =
+ rte_bswap16(aad_ccm_real_len);
+
+ if ((aad_ccm_real_len + aad_len_field_sz)
+ % ICP_QAT_HW_CCM_AAD_B0_LEN) {
+ uint8_t pad_len = 0;
+ uint8_t pad_idx = 0;
+
+ pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN -
+ ((aad_ccm_real_len +
+ aad_len_field_sz) %
+ ICP_QAT_HW_CCM_AAD_B0_LEN);
+ pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN +
+ aad_ccm_real_len +
+ aad_len_field_sz;
+ memset(&aad_data[pad_idx], 0, pad_len);
+ }
+ }
+
+ rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array)
+ + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv->va +
+ ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length);
+ *(uint8_t *)&cipher_param->u.cipher_IV_array[0] =
+ q - ICP_QAT_HW_CCM_NONCE_OFFSET;
+
+ rte_memcpy((uint8_t *)aad->va +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ break;
+ default:
+ break;
+ }
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_param->auth_off = ofs.ofs.cipher.head;
+ auth_param->auth_len = cipher_param->cipher_length;
+ auth_param->auth_res_addr = digest->iova;
+ auth_param->u1.aad_adr = aad_iova;
+
+ if (ctx->is_single_pass) {
+ cipher_param->spc_aad_addr = aad_iova;
+ cipher_param->spc_auth_res_addr = digest->iova;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *aad,
+ void *user_data)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data;
+
+ submit_one_aead_job(ctx, req, iv, digest, aad, ofs,
+ (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void *user_data[])
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
+ submit_one_aead_job(ctx, req, &vec->iv[i], &vec->digest[i],
+ &vec->aad[i], ofs, (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_cipher_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_va_iova_ptr *iv,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+
+ /* cipher IV */
+ set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req);
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv,
+ struct rte_crypto_va_iova_ptr *digest __rte_unused,
+ struct rte_crypto_va_iova_ptr *aad __rte_unused,
+ void *user_data)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data;
+
+ submit_one_cipher_job(ctx, req, iv, ofs, (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void *user_data[])
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
+ submit_one_cipher_job(ctx, req, &vec->iv[i], ofs,
+ (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline void
+submit_one_auth_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *auth_iv,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = data_len - ofs.ofs.auth.head -
+ ofs.ofs.auth.tail;
+ auth_param->auth_res_addr = digest->iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = auth_iv->iova;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
+ req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
+ rte_memcpy(cipher_param->u.cipher_IV_array, auth_iv->va,
+ ctx->auth_iv.length);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *iv __rte_unused,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *auth_iv,
+ void *user_data)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data;
+
+ submit_one_auth_job(ctx, req, digest, auth_iv, ofs,
+ (uint32_t)data_len);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void *user_data[])
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
+ submit_one_auth_job(ctx, req, &vec->digest[i],
+ &vec->auth_iv[i], ofs, (uint32_t)data_len);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline int
+submit_one_chain_job(struct qat_sym_session *ctx,
+ struct icp_qat_fw_la_bulk_req *req,
+ struct rte_crypto_vec *data,
+ uint16_t n_data_vecs,
+ struct rte_crypto_va_iova_ptr *cipher_iv,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *auth_iv,
+ union rte_crypto_sym_ofs ofs, uint32_t data_len)
+{
+ struct icp_qat_fw_la_cipher_req_params *cipher_param;
+ struct icp_qat_fw_la_auth_req_params *auth_param;
+ rte_iova_t auth_iova_end;
+ int32_t cipher_len, auth_len;
+
+ cipher_param = (void *)&req->serv_specif_rqpars;
+ auth_param = (void *)((uint8_t *)cipher_param +
+ ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+ cipher_len = data_len - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+
+ if (unlikely(cipher_len < 0 || auth_len < 0))
+ return -1;
+
+ cipher_param->cipher_offset = ofs.ofs.cipher.head;
+ cipher_param->cipher_length = cipher_len;
+ set_cipher_iv(cipher_param, cipher_iv, ctx->cipher_iv.length, req);
+
+ auth_param->auth_off = ofs.ofs.auth.head;
+ auth_param->auth_len = auth_len;
+ auth_param->auth_res_addr = digest->iova;
+
+ switch (ctx->qat_hash_alg) {
+ case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
+ case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ auth_param->u1.aad_adr = auth_iv->iova;
+
+ if (unlikely(n_data_vecs > 1)) {
+ int auth_end_get = 0, i = n_data_vecs - 1;
+ struct rte_crypto_vec *cvec = &data[0];
+ uint32_t len;
+
+ len = data_len - ofs.ofs.auth.tail;
+
+ while (i >= 0 && len > 0) {
+ if (cvec->len >= len) {
+ auth_iova_end = cvec->iova +
+ (cvec->len - len);
+ len = 0;
+ auth_end_get = 1;
+ break;
+ }
+ len -= cvec->len;
+ i--;
+ cvec++;
+ }
+
+ if (unlikely(auth_end_get == 0))
+ return -1;
+ } else
+ auth_iova_end = data[0].iova + auth_param->auth_off +
+ auth_param->auth_len;
+
+ /* Then check if digest-encrypted conditions are met */
+ if ((auth_param->auth_off + auth_param->auth_len <
+ cipher_param->cipher_offset +
+ cipher_param->cipher_length) &&
+ (digest->iova == auth_iova_end)) {
+ /* Handle partial digest encryption */
+ if (cipher_param->cipher_offset +
+ cipher_param->cipher_length <
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length)
+ req->comn_mid.dst_length =
+ req->comn_mid.src_length =
+ auth_param->auth_off +
+ auth_param->auth_len +
+ ctx->digest_length;
+ struct icp_qat_fw_comn_req_hdr *header =
+ &req->comn_hdr;
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+ header->serv_specif_flags,
+ ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+ }
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
+ case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static __rte_always_inline int
+qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_vec *data, uint16_t n_data_vecs,
+ union rte_crypto_sym_ofs ofs,
+ struct rte_crypto_va_iova_ptr *cipher_iv,
+ struct rte_crypto_va_iova_ptr *digest,
+ struct rte_crypto_va_iova_ptr *auth_iv,
+ void *user_data)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+ uint32_t tail = dp_ctx->tail;
+
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+ rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
+ data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs);
+ if (unlikely(data_len < 0))
+ return -1;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data;
+
+ if (unlikely(submit_one_chain_job(ctx, req, data, n_data_vecs,
+ cipher_iv, digest, auth_iv, ofs, (uint32_t)data_len)))
+ return -1;
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue++;
+
+ return 0;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *drv_ctx,
+ struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+ void *user_data[])
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_session *ctx = dp_ctx->session;
+ uint32_t i;
+ uint32_t tail;
+ struct icp_qat_fw_la_bulk_req *req;
+ int32_t data_len;
+
+ if (unlikely(QAT_SYM_DP_CHECK_ENQ_POSSIBLE(qp,
+ dp_ctx->cached_enqueue, vec->num) == 0)) {
+ qat_sym_dp_fill_vec_status(vec->status, -1, vec->num);
+ return 0;
+ }
+
+ tail = dp_ctx->tail;
+
+ for (i = 0; i < vec->num; i++) {
+ req = (struct icp_qat_fw_la_bulk_req *)(
+ (uint8_t *)tx_queue->base_addr + tail);
+ rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
+
+ data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
+ vec->sgl[i].num) - ofs.ofs.cipher.head -
+ ofs.ofs.cipher.tail;
+ if (unlikely(data_len < 0))
+ break;
+ req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
+ if (unlikely(submit_one_chain_job(ctx, req, vec->sgl[i].vec,
+ vec->sgl[i].num, &vec->iv[i], &vec->digest[i],
+ &vec->auth_iv[i], ofs, (uint32_t)data_len)))
+ break;
+
+ tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
+ }
+
+ if (unlikely(i < vec->num))
+ qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i);
+
+ dp_ctx->tail = tail;
+ dp_ctx->cached_enqueue += i;
+ return i;
+}
+
+static __rte_always_inline uint32_t
+qat_sym_dp_dequeue(void *qp_data, uint8_t *drv_ctx,
+ rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+ rte_cryptodev_raw_post_dequeue_t post_dequeue,
+ void **out_user_data, uint8_t is_user_data_array,
+ uint32_t *n_success_jobs)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct icp_qat_fw_comn_resp *resp;
+ void *resp_opaque;
+ uint32_t i, n, inflight;
+ uint32_t head;
+ uint8_t status;
+
+ *n_success_jobs = 0;
+ head = dp_ctx->head;
+
+ inflight = qp->enqueued - qp->dequeued;
+ if (unlikely(inflight == 0))
+ return 0;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ head);
+ /* no operation ready */
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return 0;
+
+ resp_opaque = (void *)(uintptr_t)resp->opaque_data;
+ /* get the dequeue count */
+ n = get_dequeue_count(resp_opaque);
+ if (unlikely(n == 0))
+ return 0;
+
+ out_user_data[0] = resp_opaque;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ post_dequeue(resp_opaque, 0, status);
+ *n_success_jobs += status;
+
+ head = (head + rx_queue->msg_size) & rx_queue->modulo_mask;
+
+ /* we already finished dequeue when n == 1 */
+ if (unlikely(n == 1)) {
+ i = 1;
+ goto end_deq;
+ }
+
+ if (is_user_data_array) {
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp ==
+ ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ out_user_data[i] = (void *)(uintptr_t)resp->opaque_data;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ *n_success_jobs += status;
+ post_dequeue(out_user_data[i], i, status);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ }
+
+ goto end_deq;
+ }
+
+ /* opaque is not array */
+ for (i = 1; i < n; i++) {
+ resp = (struct icp_qat_fw_comn_resp *)(
+ (uint8_t *)rx_queue->base_addr + head);
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ goto end_deq;
+ status = QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+ head = (head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ post_dequeue(resp_opaque, i, status);
+ *n_success_jobs += status;
+ }
+
+end_deq:
+ dp_ctx->head = head;
+ dp_ctx->cached_dequeue += i;
+ return i;
+}
+
+static __rte_always_inline int
+qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *drv_ctx,
+ void **out_user_data)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+ struct qat_queue *rx_queue = &qp->rx_q;
+
+ register struct icp_qat_fw_comn_resp *resp;
+
+ resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr +
+ dp_ctx->head);
+
+ if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG))
+ return -1;
+
+ *out_user_data = (void *)(uintptr_t)resp->opaque_data;
+
+ dp_ctx->head = (dp_ctx->head + rx_queue->msg_size) &
+ rx_queue->modulo_mask;
+ dp_ctx->cached_dequeue++;
+
+ return QAT_SYM_DP_IS_RESP_SUCCESS(resp);
+}
+
+static __rte_always_inline int
+qat_sym_dp_kick_tail(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *tx_queue = &qp->tx_q;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+
+ if (unlikely(dp_ctx->cached_enqueue != n))
+ return -1;
+
+ qp->enqueued += n;
+ qp->stats.enqueued_count += n;
+
+ tx_queue->tail = dp_ctx->tail;
+
+ WRITE_CSR_RING_TAIL(qp->mmap_bar_addr,
+ tx_queue->hw_bundle_number,
+ tx_queue->hw_queue_number, tx_queue->tail);
+ tx_queue->csr_tail = tx_queue->tail;
+ dp_ctx->cached_enqueue = 0;
+
+ return 0;
+}
+
+static __rte_always_inline int
+qat_sym_dp_update_head(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+ struct qat_qp *qp = qp_data;
+ struct qat_queue *rx_queue = &qp->rx_q;
+ struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
+
+ if (unlikely(dp_ctx->cached_dequeue != n))
+ return -1;
+
+ rx_queue->head = dp_ctx->head;
+ rx_queue->nb_processed_responses += n;
+ qp->dequeued += n;
+ qp->stats.dequeued_count += n;
+ if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) {
+ uint32_t old_head, new_head;
+ uint32_t max_head;
+
+ old_head = rx_queue->csr_head;
+ new_head = rx_queue->head;
+ max_head = qp->nb_descriptors * rx_queue->msg_size;
+
+ /* write out free descriptors */
+ void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head;
+
+ if (new_head < old_head) {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE,
+ max_head - old_head);
+ memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE,
+ new_head);
+ } else {
+ memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head -
+ old_head);
+ }
+ rx_queue->nb_processed_responses = 0;
+ rx_queue->csr_head = new_head;
+
+ /* write current head to CSR */
+ WRITE_CSR_RING_HEAD(qp->mmap_bar_addr,
+ rx_queue->hw_bundle_number, rx_queue->hw_queue_number,
+ new_head);
+ }
+
+ dp_ctx->cached_dequeue = 0;
+ return 0;
+}
+
+int
+qat_sym_dp_configure_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+ enum rte_crypto_op_sess_type sess_type,
+ union rte_cryptodev_session_ctx session_ctx,
+ struct rte_crypto_raw_dp_ctx *raw_dp_ctx)
+{
+ struct qat_qp *qp;
+ struct qat_sym_session *ctx;
+ struct qat_sym_dp_ctx *dp_ctx;
+
+ qp = dev->data->queue_pairs[qp_id];
+ dp_ctx = (struct qat_sym_dp_ctx *)raw_dp_ctx->drv_ctx_data;
+
+ if (session_ctx.crypto_sess == NULL) {
+ memset(raw_dp_ctx, 0, sizeof(*raw_dp_ctx) +
+ sizeof(struct qat_sym_dp_ctx));
+ raw_dp_ctx->qp_data = dev->data->queue_pairs[qp_id];
+ dp_ctx->tail = qp->tx_q.tail;
+ dp_ctx->head = qp->rx_q.head;
+ dp_ctx->cached_enqueue = dp_ctx->cached_dequeue = 0;
+ return 0;
+ }
+
+ if (sess_type != RTE_CRYPTO_OP_WITH_SESSION)
+ return -EINVAL;
+
+ ctx = (struct qat_sym_session *)get_sym_session_private_data(
+ session_ctx.crypto_sess, qat_sym_driver_id);
+
+ dp_ctx->session = ctx;
+
+ raw_dp_ctx->enqueue_done = qat_sym_dp_kick_tail;
+ raw_dp_ctx->dequeue_burst = qat_sym_dp_dequeue;
+ raw_dp_ctx->dequeue = qat_sym_dp_dequeue_single_job;
+ raw_dp_ctx->dequeue_done = qat_sym_dp_update_head;
+
+ if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER ||
+ ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+ /* AES-GCM or AES-CCM */
+ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 ||
+ (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128
+ && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE
+ && ctx->qat_hash_alg ==
+ ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) {
+ raw_dp_ctx->enqueue_burst = qat_sym_dp_submit_aead_jobs;
+ raw_dp_ctx->enqueue = qat_sym_dp_submit_single_aead;
+ } else {
+ raw_dp_ctx->enqueue_burst =
+ qat_sym_dp_submit_chain_jobs;
+ raw_dp_ctx->enqueue = qat_sym_dp_submit_single_chain;
+ }
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
+ raw_dp_ctx->enqueue_burst = qat_sym_dp_submit_auth_jobs;
+ raw_dp_ctx->enqueue = qat_sym_dp_submit_single_auth;
+ } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
+ raw_dp_ctx->enqueue_burst = qat_sym_dp_submit_cipher_jobs;
+ raw_dp_ctx->enqueue = qat_sym_dp_submit_single_cipher;
+ }
+
+ return 0;
+}
+
+int
+qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+ return sizeof(struct qat_sym_dp_ctx);
+}
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 314742f53..0744e6306 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = {
/* Crypto related operations */
.sym_session_get_size = qat_sym_session_get_private_size,
.sym_session_configure = qat_sym_session_configure,
- .sym_session_clear = qat_sym_session_clear
+ .sym_session_clear = qat_sym_session_clear,
+
+ /* Raw data-path API related operations */
+ .get_drv_ctx_size = qat_sym_get_service_ctx_size,
+ .configure_dp_ctx = qat_sym_dp_configure_ctx,
};
#ifdef RTE_LIBRTE_SECURITY
@@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
+ RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+ RTE_CRYPTODEV_FF_SYM_HW_RAW_DP;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* [dpdk-dev] [dpdk-dev v10 4/4] test/crypto: add unit-test for cryptodev raw API test
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Fan Zhang
` (2 preceding siblings ...)
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 3/4] crypto/qat: add raw crypto data-path API support Fan Zhang
@ 2020-09-24 16:34 ` Fan Zhang
2020-09-25 8:05 ` Dybkowski, AdamX
2020-10-08 15:01 ` Akhil Goyal
2020-10-08 15:04 ` [dpdk-dev] [dpdk-dev v10 0/4] cryptodev: add raw data-path APIs Akhil Goyal
2020-10-09 21:11 ` [dpdk-dev] [dpdk-dev v11 " Fan Zhang
5 siblings, 2 replies; 84+ messages in thread
From: Fan Zhang @ 2020-09-24 16:34 UTC (permalink / raw)
To: dev
Cc: akhil.goyal, fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski,
anoobj, konstantin.ananyev, Fan Zhang
This patch adds cryptodev raw API test support to the unit tests.
In addition, a new test case for the QAT PMD is enabled for this
test type.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev.c | 759 ++++++++++++++++++++++++--
app/test/test_cryptodev.h | 12 +
app/test/test_cryptodev_blockcipher.c | 58 +-
3 files changed, 775 insertions(+), 54 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 99f1eed82..4e7dd1b63 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -49,6 +49,10 @@
#define VDEV_ARGS_SIZE 100
#define MAX_NB_SESSIONS 4
+#define MAX_DRV_SERVICE_CTX_SIZE 256
+
+#define MAX_RAW_DEQUEUE_COUNT 65535
+
#define IN_PLACE 0
#define OUT_OF_PLACE 1
@@ -57,6 +61,8 @@ static int gbl_driver_id;
static enum rte_security_session_action_type gbl_action_type =
RTE_SECURITY_ACTION_TYPE_NONE;
+enum cryptodev_api_test_type global_api_test_type = CRYPTODEV_API_TEST;
+
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
struct rte_mempool *large_mbuf_pool;
@@ -147,6 +153,187 @@ ceil_byte_length(uint32_t num_bits)
return (num_bits >> 3);
}
+void
+process_sym_raw_hw_api_op(uint8_t dev_id, uint16_t qp_id,
+ struct rte_crypto_op *op, uint8_t is_cipher, uint8_t is_auth,
+ uint8_t len_in_bits, uint8_t cipher_iv_len)
+{
+ int32_t n;
+ struct rte_crypto_sym_op *sop = op->sym;
+ struct rte_crypto_op *ret_op = NULL;
+ struct rte_crypto_vec data_vec[UINT8_MAX];
+ struct rte_crypto_va_iova_ptr cipher_iv, digest, aad_auth_iv;
+ union rte_crypto_sym_ofs ofs;
+ struct rte_crypto_sym_vec vec;
+ struct rte_crypto_sgl sgl;
+ uint32_t max_len;
+ union rte_cryptodev_session_ctx sess;
+ uint32_t count = 0;
+ struct rte_crypto_raw_dp_ctx *ctx;
+ uint32_t cipher_offset = 0, cipher_len = 0, auth_offset = 0,
+ auth_len = 0;
+ int ctx_service_size;
+ int32_t status = 0;
+
+ ctx_service_size = rte_cryptodev_raw_get_dp_context_size(dev_id);
+ if (ctx_service_size < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ ctx = malloc(ctx_service_size);
+ if (!ctx) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ if (rte_cryptodev_raw_configure_dp_context(dev_id, qp_id, ctx) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ free(ctx);
+ return;
+ }
+
+ sess.crypto_sess = sop->session;
+ if (rte_cryptodev_raw_attach_session(dev_id, qp_id, ctx,
+ op->sess_type, sess) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ free(ctx);
+ return;
+ }
+
+ cipher_iv.iova = 0;
+ cipher_iv.va = NULL;
+ aad_auth_iv.iova = 0;
+ aad_auth_iv.va = NULL;
+ digest.iova = 0;
+ digest.va = NULL;
+ sgl.vec = data_vec;
+ vec.num = 1;
+ vec.sgl = &sgl;
+ vec.iv = &cipher_iv;
+ vec.digest = &digest;
+ vec.aad = &aad_auth_iv;
+ vec.status = &status;
+
+ ofs.raw = 0;
+
+ if (is_cipher && is_auth) {
+ cipher_offset = sop->cipher.data.offset;
+ cipher_len = sop->cipher.data.length;
+ auth_offset = sop->auth.data.offset;
+ auth_len = sop->auth.data.length;
+ max_len = RTE_MAX(cipher_offset + cipher_len,
+ auth_offset + auth_len);
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ cipher_offset = cipher_offset >> 3;
+ auth_offset = auth_offset >> 3;
+ cipher_len = cipher_len >> 3;
+ auth_len = auth_len >> 3;
+ }
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ ofs.ofs.auth.head = auth_offset;
+ ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
+ cipher_iv.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ cipher_iv.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ aad_auth_iv.va = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET + cipher_iv_len);
+ aad_auth_iv.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET +
+ cipher_iv_len);
+ digest.va = (void *)sop->auth.digest.data;
+ digest.iova = sop->auth.digest.phys_addr;
+
+ } else if (is_cipher) {
+ cipher_offset = sop->cipher.data.offset;
+ cipher_len = sop->cipher.data.length;
+ max_len = cipher_len + cipher_offset;
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ cipher_offset = cipher_offset >> 3;
+ cipher_len = cipher_len >> 3;
+ }
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ cipher_iv.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ cipher_iv.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+
+ } else if (is_auth) {
+ auth_offset = sop->auth.data.offset;
+ auth_len = sop->auth.data.length;
+ max_len = auth_len + auth_offset;
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ auth_offset = auth_offset >> 3;
+ auth_len = auth_len >> 3;
+ }
+ ofs.ofs.auth.head = auth_offset;
+ ofs.ofs.auth.tail = max_len - auth_offset - auth_len;
+ aad_auth_iv.va = rte_crypto_op_ctod_offset(
+ op, void *, IV_OFFSET + cipher_iv_len);
+ aad_auth_iv.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET +
+ cipher_iv_len);
+ digest.va = (void *)sop->auth.digest.data;
+ digest.iova = sop->auth.digest.phys_addr;
+
+ } else { /* aead */
+ cipher_offset = sop->aead.data.offset;
+ cipher_len = sop->aead.data.length;
+ max_len = cipher_len + cipher_offset;
+ if (len_in_bits) {
+ max_len = max_len >> 3;
+ cipher_offset = cipher_offset >> 3;
+ cipher_len = cipher_len >> 3;
+ }
+ ofs.ofs.cipher.head = cipher_offset;
+ ofs.ofs.cipher.tail = max_len - cipher_offset - cipher_len;
+ cipher_iv.va = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET);
+ cipher_iv.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ aad_auth_iv.va = (void *)sop->aead.aad.data;
+ aad_auth_iv.iova = sop->aead.aad.phys_addr;
+ digest.va = (void *)sop->aead.digest.data;
+ digest.iova = sop->aead.digest.phys_addr;
+ }
+
+ n = rte_crypto_mbuf_to_vec(sop->m_src, 0, max_len,
+ data_vec, RTE_DIM(data_vec));
+ if (n < 0 || n > sop->m_src->nb_segs) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ sgl.num = n;
+
+ if (rte_cryptodev_raw_enqueue_burst(ctx, &vec, ofs, (void **)&op)
+ < 1) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ status = rte_cryptodev_raw_enqueue_done(ctx, 1);
+ if (status < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+
+ status = -1;
+ while (count++ < MAX_RAW_DEQUEUE_COUNT && status == -1) {
+ status = rte_cryptodev_raw_dequeue(ctx, (void **)&ret_op);
+ if (status == -1)
+ rte_pause();
+ }
+
+ if (status != -1) {
+ if (rte_cryptodev_raw_dequeue_done(ctx, 1) < 0) {
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ return;
+ }
+ }
+
+ op->status = (count == MAX_RAW_DEQUEUE_COUNT + 1 || ret_op != op ||
+ status != 1) ? RTE_CRYPTO_OP_STATUS_ERROR :
+ RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
static void
process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
{
@@ -1661,6 +1848,9 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -1715,12 +1905,18 @@ test_AES_cipheronly_all(void)
static int
test_AES_docsis_all(void)
{
+ /* Data-path service does not support DOCSIS yet */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
return test_blockcipher(BLKCIPHER_AES_DOCSIS_TYPE);
}
static int
test_DES_docsis_all(void)
{
+ /* Data-path service does not support DOCSIS yet */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
return test_blockcipher(BLKCIPHER_DES_DOCSIS_TYPE);
}
@@ -2435,6 +2631,12 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2475,7 +2677,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -2513,6 +2719,12 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2554,7 +2766,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2580,6 +2796,16 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
unsigned plaintext_pad_len;
unsigned plaintext_len;
uint8_t *plaintext;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -2624,6 +2850,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -2653,6 +2882,16 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
unsigned plaintext_pad_len;
unsigned plaintext_len;
uint8_t *plaintext;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -2695,7 +2934,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -2860,6 +3103,16 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned plaintext_pad_len;
unsigned plaintext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -2902,8 +3155,12 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
- ut_params->op);
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -2959,6 +3216,12 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
/* Create KASUMI session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
@@ -2988,7 +3251,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3031,10 +3298,14 @@ test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
struct rte_cryptodev_sym_capability_idx cap_idx;
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_KASUMI_F8;
if (rte_cryptodev_sym_capability_get(ts_params->valid_devs[0],
&cap_idx) == NULL)
return -ENOTSUP;
+ /* Data-path service does not support OOP */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
/* Create KASUMI session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
@@ -3116,6 +3387,9 @@ test_kasumi_encryption_oop_sgl(const struct kasumi_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
uint64_t feat_flags = dev_info.feature_flags;
@@ -3201,6 +3475,9 @@ test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
/* Create KASUMI session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
@@ -3269,6 +3546,16 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
uint8_t *ciphertext, *plaintext;
unsigned ciphertext_pad_len;
unsigned ciphertext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -3311,7 +3598,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3344,6 +3635,16 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned plaintext_pad_len;
unsigned plaintext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -3386,7 +3687,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -3427,6 +3732,9 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
/* Create SNOW 3G session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
@@ -3510,6 +3818,9 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
uint64_t feat_flags = dev_info.feature_flags;
@@ -3629,6 +3940,9 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
/* Create SNOW 3G session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
@@ -3719,6 +4033,16 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned ciphertext_pad_len;
unsigned ciphertext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -3761,7 +4085,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_dst;
@@ -3799,6 +4127,9 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
/* Create SNOW 3G session */
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
@@ -3886,6 +4217,12 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata)
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
/* Check if device supports ZUC EEA3 */
cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cap_idx.algo.cipher = RTE_CRYPTO_CIPHER_ZUC_EEA3;
@@ -3929,7 +4266,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -3969,6 +4310,16 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned plaintext_pad_len;
unsigned plaintext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -4024,7 +4375,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = ut_params->op->sym->m_src;
@@ -4092,6 +4447,14 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
printf("Device doesn't support digest encrypted.\n");
return -ENOTSUP;
}
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+ }
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
}
/* Create SNOW 3G session */
@@ -4160,7 +4523,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4270,7 +4637,14 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
"in both input and output mbufs.\n");
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
} else {
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -4349,7 +4723,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4457,7 +4835,15 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
uint64_t feat_flags = dev_info.feature_flags;
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
if (op_mode == OUT_OF_PLACE) {
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED)) {
printf("Device doesn't support digest encrypted.\n");
return -ENOTSUP;
@@ -4531,7 +4917,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4642,7 +5032,14 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
"in both input and output mbufs.\n");
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
} else {
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -4721,7 +5118,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4806,6 +5207,16 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned plaintext_pad_len;
unsigned plaintext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -4862,7 +5273,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -4905,6 +5320,16 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
uint8_t *plaintext, *ciphertext;
unsigned plaintext_pad_len;
unsigned plaintext_len;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -4949,7 +5374,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5004,6 +5433,12 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
plaintext_len = ceil_byte_length(tdata->plaintext.len);
/* Append data which is padded to a multiple */
@@ -5036,7 +5471,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5084,6 +5523,12 @@ test_zuc_authentication(const struct wireless_test_data *tdata)
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
/* Check if device supports ZUC EIA3 */
cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
cap_idx.algo.auth = RTE_CRYPTO_AUTH_ZUC_EIA3;
@@ -5124,7 +5569,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata)
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 1, 0);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
ut_params->obuf = ut_params->op->sym->m_src;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5181,7 +5630,15 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
"in both input and output mbufs.\n");
return -ENOTSUP;
}
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
} else {
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -5256,7 +5713,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5363,7 +5824,15 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
"in both input and output mbufs.\n");
return -ENOTSUP;
}
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
} else {
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
printf("Device doesn't support out-of-place scatter-gather "
"in both input and output mbufs.\n");
@@ -5442,7 +5911,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata,
if (retval < 0)
return retval;
- ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 1, tdata->cipher_iv.len);
+ else
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -5585,6 +6058,9 @@ test_kasumi_decryption_test_case_2(void)
static int
test_kasumi_decryption_test_case_3(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
return test_kasumi_decryption(&kasumi_test_case_3);
}
@@ -5784,6 +6260,9 @@ test_snow3g_auth_cipher_part_digest_enc_oop(void)
static int
test_snow3g_auth_cipher_test_case_3_sgl(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
return test_snow3g_auth_cipher_sgl(
&snow3g_auth_cipher_test_case_3, IN_PLACE, 0);
}
@@ -5798,6 +6277,9 @@ test_snow3g_auth_cipher_test_case_3_oop_sgl(void)
static int
test_snow3g_auth_cipher_part_digest_enc_sgl(void)
{
+ /* rte_crypto_mbuf_to_vec does not support incomplete mbuf build */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
return test_snow3g_auth_cipher_sgl(
&snow3g_auth_cipher_partial_digest_encryption,
IN_PLACE, 0);
@@ -6151,11 +6633,12 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
unsigned int ciphertext_len;
struct rte_cryptodev_info dev_info;
- struct rte_crypto_op *op;
/* Check if device supports particular algorithms separately */
if (test_mixed_check_if_unsupported(tdata))
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -6166,6 +6649,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
return -ENOTSUP;
}
+ if (op_mode == OUT_OF_PLACE)
+ return -ENOTSUP;
+
/* Create the session */
if (verify)
retval = create_wireless_algo_cipher_auth_session(
@@ -6197,9 +6683,11 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
/* clear mbuf payload */
memset(rte_pktmbuf_mtod(ut_params->ibuf, uint8_t *), 0,
rte_pktmbuf_tailroom(ut_params->ibuf));
- if (op_mode == OUT_OF_PLACE)
+ if (op_mode == OUT_OF_PLACE) {
memset(rte_pktmbuf_mtod(ut_params->obuf, uint8_t *), 0,
rte_pktmbuf_tailroom(ut_params->obuf));
+ }
ciphertext_len = ceil_byte_length(tdata->ciphertext.len_bits);
plaintext_len = ceil_byte_length(tdata->plaintext.len_bits);
@@ -6240,18 +6728,17 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
if (retval < 0)
return retval;
- op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
/* Check if the op failed because the device doesn't */
/* support this particular combination of algorithms */
- if (op == NULL && ut_params->op->status ==
- RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
+ if (ut_params->op == NULL) {
printf("Device doesn't support this mixed combination. "
"Test Skipped.\n");
return -ENOTSUP;
}
- ut_params->op = op;
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
@@ -6342,11 +6829,12 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata,
uint8_t digest_buffer[10000];
struct rte_cryptodev_info dev_info;
- struct rte_crypto_op *op;
/* Check if device supports particular algorithms */
if (test_mixed_check_if_unsupported(tdata))
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
@@ -6445,20 +6933,18 @@ test_mixed_auth_cipher_sgl(const struct mixed_cipher_auth_test_data *tdata,
if (retval < 0)
return retval;
- op = process_crypto_request(ts_params->valid_devs[0],
+ ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
/* Check if the op failed because the device doesn't */
/* support this particular combination of algorithms */
- if (op == NULL && ut_params->op->status ==
- RTE_CRYPTO_OP_STATUS_INVALID_SESSION) {
+ if (ut_params->op == NULL) {
printf("Device doesn't support this mixed combination. "
"Test Skipped.\n");
return -ENOTSUP;
}
- ut_params->op = op;
-
TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
ut_params->obuf = (op_mode == IN_PLACE ?
@@ -6999,6 +7485,16 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
uint8_t *ciphertext, *auth_tag;
uint16_t plaintext_pad_len;
uint32_t i;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -7048,6 +7544,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8496,6 +8995,16 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
int retval;
uint8_t *plaintext;
uint32_t i;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -8545,6 +9054,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata)
/* Process crypto operation */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -8839,6 +9351,9 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
+
/* not supported with CPU crypto */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
return -ENOTSUP;
@@ -8928,8 +9443,9 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
&cap_idx) == NULL)
return -ENOTSUP;
- /* not supported with CPU crypto */
- if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+ /* not supported with CPU crypto and raw data-path APIs*/
+ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO ||
+ global_api_test_type == CRYPTODEV_RAW_API_TEST)
return -ENOTSUP;
/* Create AEAD session */
@@ -9115,6 +9631,12 @@ test_authenticated_decryption_sessionless(
return -ENOTSUP;
}
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
+
/* not supported with CPU crypto */
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
return -ENOTSUP;
@@ -9156,8 +9678,13 @@ test_authenticated_decryption_sessionless(
"crypto op session type not sessionless");
/* Process crypto operation */
- TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
- ut_params->op), "failed to process sym crypto op");
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
+ else
+ TEST_ASSERT_NOT_NULL(process_crypto_request(
+ ts_params->valid_devs[0], ut_params->op),
+ "failed to process sym crypto op");
TEST_ASSERT_NOT_NULL(ut_params->op, "failed crypto process");
@@ -9449,6 +9976,16 @@ test_MD5_HMAC_generate(const struct HMAC_MD5_vector *test_case)
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -9477,6 +10014,9 @@ test_MD5_HMAC_generate(const struct HMAC_MD5_vector *test_case)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -9509,6 +10049,16 @@ test_MD5_HMAC_verify(const struct HMAC_MD5_vector *test_case)
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -9535,6 +10085,9 @@ test_MD5_HMAC_verify(const struct HMAC_MD5_vector *test_case)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10040,6 +10593,16 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
int retval;
@@ -10103,6 +10666,9 @@ test_AES_GMAC_authentication(const struct gmac_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10161,6 +10727,16 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata)
int retval;
uint32_t plaintext_pad_len;
uint8_t *plaintext;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
TEST_ASSERT_NOT_EQUAL(tdata->gmac_tag.len, 0,
"No GMAC length in the source data");
@@ -10220,6 +10796,9 @@ test_AES_GMAC_authentication_verify(const struct gmac_test_data *tdata)
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -10735,6 +11314,16 @@ test_authentication_verify_fail_when_data_corruption(
int retval;
uint8_t *plaintext;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -10744,6 +11333,7 @@ test_authentication_verify_fail_when_data_corruption(
&cap_idx) == NULL)
return -ENOTSUP;
+
/* Create session */
retval = create_auth_session(ut_params,
ts_params->valid_devs[0],
@@ -10785,7 +11375,10 @@ test_authentication_verify_fail_when_data_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10803,6 +11396,16 @@ test_authentication_verify_GMAC_fail_when_corruption(
{
int retval;
uint8_t *plaintext;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -10856,7 +11459,10 @@ test_authentication_verify_GMAC_fail_when_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10875,6 +11481,16 @@ test_authenticated_decryption_fail_when_corruption(
int retval;
uint8_t *ciphertext;
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -10931,7 +11547,10 @@ test_authenticated_decryption_fail_when_corruption(
TEST_ASSERT_NOT_EQUAL(ut_params->op->status,
RTE_CRYPTO_OP_STATUS_SUCCESS,
"authentication not failed");
- } else {
+ } else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
+ else {
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
@@ -10952,6 +11571,16 @@ test_authenticated_encryt_with_esn(
uint16_t plaintext_pad_len;
uint8_t cipher_key[reference->cipher_key.len + 1];
uint8_t auth_key[reference->auth_key.len + 1];
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -11026,6 +11655,9 @@ test_authenticated_encryt_with_esn(
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
ut_params->op = process_crypto_request(
ts_params->valid_devs[0], ut_params->op);
@@ -11072,6 +11704,16 @@ test_authenticated_decrypt_with_esn(
uint8_t *ciphertext;
uint8_t cipher_key[reference->cipher_key.len + 1];
uint8_t auth_key[reference->auth_key.len + 1];
+ struct rte_cryptodev_info dev_info;
+
+ rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
/* Verify the capabilities */
struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -11146,6 +11788,9 @@ test_authenticated_decrypt_with_esn(
if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_crypt_auth_op(ts_params->valid_devs[0],
ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 1, 1, 0, 0);
else
ut_params->op = process_crypto_request(ts_params->valid_devs[0],
ut_params->op);
@@ -11286,10 +11931,21 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
if (sgl_in && (!(dev_info.feature_flags &
RTE_CRYPTODEV_FF_IN_PLACE_SGL)))
return -ENOTSUP;
+
+ uint64_t feat_flags = dev_info.feature_flags;
+
+ if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+ (!(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP))) {
+ printf("Device doesn't support RAW data-path APIs.\n");
+ return -ENOTSUP;
+ }
} else {
unsigned int sgl_in = fragsz < tdata->plaintext.len;
unsigned int sgl_out = (fragsz_oop ? fragsz_oop : fragsz) <
tdata->plaintext.len;
+ /* Raw data path API does not support OOP */
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ return -ENOTSUP;
if (sgl_in && !sgl_out) {
if (!(dev_info.feature_flags &
RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))
@@ -11485,6 +12141,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata,
if (oop == IN_PLACE &&
gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op);
+ else if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+ process_sym_raw_hw_api_op(ts_params->valid_devs[0], 0,
+ ut_params->op, 0, 0, 0, 0);
else
TEST_ASSERT_NOT_NULL(
process_crypto_request(ts_params->valid_devs[0],
@@ -13046,6 +13705,30 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_qat_raw_api(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+ int ret;
+
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both "
+ "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM "
+ "are enabled in config file to run this testsuite.\n");
+ return TEST_SKIPPED;
+ }
+
+ global_api_test_type = CRYPTODEV_RAW_API_TEST;
+ ret = unit_test_suite_runner(&cryptodev_testsuite);
+ global_api_test_type = CRYPTODEV_API_TEST;
+
+ return ret;
+}
+
+REGISTER_TEST_COMMAND(cryptodev_qat_raw_api_autotest,
+ test_cryptodev_qat_raw_api);
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..d8fc0db53 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -71,6 +71,13 @@
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+enum cryptodev_api_test_type {
+ CRYPTODEV_API_TEST = 0,
+ CRYPTODEV_RAW_API_TEST
+};
+
+extern enum cryptodev_api_test_type global_api_test_type;
+
/**
* Write (spread) data from buffer to mbuf data
*
@@ -209,4 +216,9 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len,
return NULL;
}
+void
+process_sym_raw_hw_api_op(uint8_t dev_id, uint16_t qp_id,
+ struct rte_crypto_op *op, uint8_t is_cipher, uint8_t is_auth,
+ uint8_t len_in_bits, uint8_t cipher_iv_len);
+
#endif /* TEST_CRYPTODEV_H_ */
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 221262341..f675a2c92 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -136,6 +136,14 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
nb_segs = 3;
}
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST &&
+ !(feat_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)) {
+ printf("Device doesn't support raw data-path APIs. "
+ "Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED");
+ return TEST_SKIPPED;
+ }
+
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
uint64_t oop_flags = RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
@@ -148,6 +156,13 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
"SKIPPED");
return TEST_SKIPPED;
}
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST) {
+ printf("Raw Data Path APIs do not support OOP, "
+ "Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED");
+ status = TEST_SUCCESS;
+ goto error_exit;
+ }
}
if (tdata->cipher_key.len)
@@ -462,25 +477,36 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
/* Process crypto operation */
- if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Error sending packet for encryption");
- status = TEST_FAILED;
- goto error_exit;
- }
+ if (global_api_test_type == CRYPTODEV_RAW_API_TEST) {
+ uint8_t is_cipher = 0, is_auth = 0;
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER)
+ is_cipher = 1;
+ if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH)
+ is_auth = 1;
- op = NULL;
+ process_sym_raw_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0,
+ tdata->iv.len);
+ } else {
+ if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Error sending packet for encryption");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
- while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
- rte_pause();
+ op = NULL;
- if (!op) {
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "Failed to process sym crypto op");
- status = TEST_FAILED;
- goto error_exit;
+ while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
+ rte_pause();
+
+ if (!op) {
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "line %u FAILED: %s",
+ __LINE__, "Failed to process sym crypto op");
+ status = TEST_FAILED;
+ goto error_exit;
+ }
}
debug_hexdump(stdout, "m_src(after):",
--
2.20.1
^ permalink raw reply [flat|nested] 84+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
@ 2020-09-25 8:03 ` Dybkowski, AdamX
2020-09-28 17:01 ` Ananyev, Konstantin
1 sibling, 0 replies; 84+ messages in thread
From: Dybkowski, AdamX @ 2020-09-25 8:03 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Kusztal, ArkadiuszX, anoobj, Ananyev,
Konstantin
> -----Original Message-----
> From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Sent: Thursday, 24 September, 2020 18:34
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal,
> ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; anoobj@marvell.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>
> Subject: [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector
> structure
>
> This patch updates ``rte_crypto_sym_vec`` structure to add support for both
> cpu_crypto synchronous operation and asynchronous raw data-path APIs.
> The patch also includes AESNI-MB and AESNI-GCM PMD changes, unit test
> changes and documentation updates.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
* Re: [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs Fan Zhang
@ 2020-09-25 8:04 ` Dybkowski, AdamX
2020-10-08 14:26 ` Akhil Goyal
2020-10-08 14:37 ` Akhil Goyal
2 siblings, 0 replies; 84+ messages in thread
From: Dybkowski, AdamX @ 2020-09-25 8:04 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Kusztal, ArkadiuszX, anoobj, Ananyev,
Konstantin, Bronowski, PiotrX
> -----Original Message-----
> From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Sent: Thursday, 24 September, 2020 18:34
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal,
> ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; anoobj@marvell.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Bronowski, PiotrX
> <piotrx.bronowski@intel.com>
> Subject: [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs
>
> This patch adds raw data-path APIs for enqueue and dequeue operations to
> cryptodev. The APIs support flexible user-defined enqueue and dequeue
> behaviors.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
* Re: [dpdk-dev] [dpdk-dev v10 3/4] crypto/qat: add raw crypto data-path API support
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 3/4] crypto/qat: add raw crypto data-path API support Fan Zhang
@ 2020-09-25 8:04 ` Dybkowski, AdamX
0 siblings, 0 replies; 84+ messages in thread
From: Dybkowski, AdamX @ 2020-09-25 8:04 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Kusztal, ArkadiuszX, anoobj, Ananyev,
Konstantin
> -----Original Message-----
> From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Sent: Thursday, 24 September, 2020 18:34
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal,
> ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; anoobj@marvell.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>
> Subject: [dpdk-dev v10 3/4] crypto/qat: add raw crypto data-path API
> support
>
> This patch updates QAT PMD to add raw data-path API support.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
* Re: [dpdk-dev] [dpdk-dev v10 4/4] test/crypto: add unit-test for cryptodev raw API test
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 4/4] test/crypto: add unit-test for cryptodev raw API test Fan Zhang
@ 2020-09-25 8:05 ` Dybkowski, AdamX
2020-10-08 15:01 ` Akhil Goyal
1 sibling, 0 replies; 84+ messages in thread
From: Dybkowski, AdamX @ 2020-09-25 8:05 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Kusztal, ArkadiuszX, anoobj, Ananyev,
Konstantin
> -----Original Message-----
> From: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Sent: Thursday, 24 September, 2020 18:34
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; Trahe, Fiona <fiona.trahe@intel.com>; Kusztal,
> ArkadiuszX <arkadiuszx.kusztal@intel.com>; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; anoobj@marvell.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>
> Subject: [dpdk-dev v10 4/4] test/crypto: add unit-test for cryptodev raw API
> test
>
> This patch adds the cryptodev raw API test support to the unit tests.
> In addition, a new test case for the QAT PMD is enabled for this
> test type.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
* Re: [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 1/4] cryptodev: change crypto symmetric vector structure Fan Zhang
2020-09-25 8:03 ` Dybkowski, AdamX
@ 2020-09-28 17:01 ` Ananyev, Konstantin
1 sibling, 0 replies; 84+ messages in thread
From: Ananyev, Konstantin @ 2020-09-28 17:01 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
Cc: akhil.goyal, Trahe, Fiona, Kusztal, ArkadiuszX, Dybkowski, AdamX, anoobj
> This patch updates ``rte_crypto_sym_vec`` structure to add
> support for both cpu_crypto synchronous operation and
> asynchronous raw data-path APIs. The patch also includes
> AESNI-MB and AESNI-GCM PMD changes, unit test changes and
> documentation updates.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.20.1
* Re: [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs
2020-09-24 16:34 ` [dpdk-dev] [dpdk-dev v10 2/4] cryptodev: add raw crypto data-path APIs Fan Zhang
2020-09-25 8:04 ` Dybkowski, AdamX
@ 2020-10-08 14:26 ` Akhil Goyal
2020-10-08 15:29 ` Zhang, Roy Fan
2020-10-08 14:37 ` Akhil Goyal
2 siblings, 1 reply; 84+ messages in thread
From: Akhil Goyal @ 2020-10-08 14:26 UTC (permalink / raw)
To: Fan Zhang, dev
Cc: fiona.trahe, arkadiuszx.kusztal, adamx.dybkowski, anoobj,
konstantin.ananyev, Piotr Bronowski
Hi Fan,
>
> This patch adds raw data-path APIs for enqueue and dequeue
> operations to cryptodev. The APIs support flexible user-defined
> enqueue and dequeue behaviors.
>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
> ---
> doc/guides/prog_guide/cryptodev_lib.rst | 93 +++++
> doc/guides/rel_notes/release_20_11.rst | 7 +
> lib/librte_cryptodev/rte_crypto_sym.h | 2 +-
> lib/librte_cryptodev/rte_cryptodev.c | 104 +++++
> lib/librte_cryptodev/rte_cryptodev.h | 354 +++++++++++++++++-
> lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++-
> .../rte_cryptodev_version.map | 11 +
> 7 files changed, 614 insertions(+), 4 deletions(-)
>
> diff --git a/doc/guides/prog_guide/cryptodev_lib.rst
> b/doc/guides/prog_guide/cryptodev_lib.rst
> index e7ba35c2d..5fe6c3c24 100644
> --- a/doc/guides/prog_guide/cryptodev_lib.rst
> +++ b/doc/guides/prog_guide/cryptodev_lib.rst
> @@ -632,6 +632,99 @@ a call argument. Status different than zero must be
> treated as error.
> For more details, e.g. how to convert an mbuf to an SGL, please refer to an
> example usage in the IPsec library implementation.
>
> +Cryptodev Raw Data-path APIs
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The Crypto Raw data-path APIs are a set of APIs are designed to enable
The Crypto Raw data-path APIs are a set of APIs designed to enable
external libraries/applications to leverage the cryptographic
> +externel libraries/applications which want to leverage the cryptographic
> +processing provided by DPDK crypto PMDs through the cryptodev API but in a
> +manner that is not dependent on native DPDK data structures (eg. rte_mbuf,
> +rte_crypto_op, ... etc) in their data-path implementation.
> +
> +The raw data-path APIs have the following advantages:
> +- External data structure friendly design. The new APIs use the operation
> + descriptor ``struct rte_crypto_sym_vec`` that supports raw data pointers and
> + IOVA addresses as input. Moreover, the APIs do not require the user to
> + allocate the descriptor from a mempool, nor require mbufs to describe the
> + input data's virtual and IOVA addresses. All these features make the
> + translation from the user's own data structure into the descriptor easier
> + and more efficient.
> +- Flexible enqueue and dequeue operation. The raw data-path APIs give the
> + user more control over the enqueue and dequeue operations, including the
> + capability of precise enqueue/dequeue count, abandoning enqueue or dequeue
> + at any time, and translating and setting the operation status on the fly.
> +
> +Cryptodev PMDs who supports the raw data-path APIs will have
Cryptodev PMDs which support raw data-path APIs will have
> +``RTE_CRYPTODEV_FF_SYM_HW_RAW_DP`` feature flag presented. To use this
> +feature, the user should create a local ``struct rte_crypto_raw_dp_ctx``
> +buffer and extend to at least the length returned by
> +``rte_cryptodev_raw_get_dp_context_size`` function call. The created buffer
``rte_cryptodev_get_raw_dp_ctx_size``
> +is then configured using ``rte_cryptodev_raw_configure_dp_context`` function.
rte_cryptodev _configure_raw_dp_ctx
> +The library and the crypto device driver will then configure the buffer and
> +write necessary temporary data into the buffer for later enqueue and dequeue
> +operations. The temporary data may be treated as the shadow copy of the
> +driver's private queue pair data.
> +
> +After the ``struct rte_crypto_raw_dp_ctx`` buffer is initialized, it is then
> +attached either the cryptodev sym session, the rte_security session, or the
attached either with the cryptodev sym session, the rte_security session, or the
> +cryptodev xform for session-less operation by
> +``rte_cryptodev_raw_attach_session`` function. With the session or xform
``rte_cryptodev_attach_raw_session`` API
> +information the driver will set the corresponding enqueue and dequeue
> function
> +handlers to the ``struct rte_crypto_raw_dp_ctx`` buffer.
> +
> +After the session is attached, the ``struct rte_crypto_raw_dp_ctx`` buffer is
> +ready for enqueue and dequeue operations. There are two different enqueue
> +functions: ``rte_cryptodev_raw_enqueue`` to enqueue a single descriptor,
> +and ``rte_cryptodev_raw_enqueue_burst`` to enqueue multiple descriptors.
> +If the application uses an approach similar to ``struct rte_crypto_sym_vec``
> +to manage its data bursts but with a different data structure, using the
> +``rte_cryptodev_raw_enqueue_burst`` function may be less efficient, as the
> +application has to loop over all crypto descriptors to assemble the
> +``struct rte_crypto_sym_vec`` buffer from its own data structure, and then
> +the driver will loop over them again to translate every crypto job to the
> +driver's specific queue data. ``rte_cryptodev_raw_enqueue`` should be used
> +instead to save one loop for each data burst.
> +
> +During the enqueue, the cryptodev driver only sets the enqueued descriptors
> +into the device queue but does not initiate the device to start processing
> +them. The temporary queue pair data changes related to the enqueued
> +descriptors may be recorded in the ``struct rte_crypto_raw_dp_ctx`` buffer
> +as the reference for the next enqueue function call. When
> +``rte_cryptodev_raw_enqueue_done`` is called, the driver will initiate the
> +processing of all enqueued descriptors and merge the temporary queue pair
> +data changes into the driver's private queue pair data. Calling
> +``rte_cryptodev_raw_configure_dp_context`` twice without an
> +``rte_cryptodev_raw_enqueue_done`` call in between will invalidate the
> +temporary data stored in the ``struct rte_crypto_raw_dp_ctx`` buffer. This
> +is useful when the user wants to abandon a partially enqueued burst after a
> +failed enqueue operation and retry enqueuing the whole burst later.
This feature may not be supported by all the HW PMDs. Can there be a way to
bypass this done API?
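To make the deferred "done" semantics in the quoted paragraph concrete, here is a minimal self-contained C model. This is NOT DPDK code: the queue, the context structure, and all `toy_*` names are invented for illustration. Enqueues are staged against a shadow copy of the queue tail; only an explicit done call merges the shadow into the driver's private queue data, and re-configuring the context discards the staged work, which models abandoning a partially enqueued burst.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define QSIZE 8

struct toy_queue {
	uint32_t desc[QSIZE];
	uint16_t tail;          /* "driver private" published tail */
};

struct toy_dp_ctx {
	struct toy_queue *q;
	uint16_t shadow_tail;   /* temporary copy, merged on done */
};

/* Configure (or re-configure) the context: any staged-but-not-done
 * enqueues are invalidated, mirroring the behavior described above. */
static void toy_configure(struct toy_dp_ctx *ctx, struct toy_queue *q)
{
	ctx->q = q;
	ctx->shadow_tail = q->tail;
}

/* Stage one descriptor; the device queue tail is NOT advanced yet. */
static int toy_enqueue(struct toy_dp_ctx *ctx, uint32_t d)
{
	if ((uint16_t)(ctx->shadow_tail - ctx->q->tail) == QSIZE)
		return -1;  /* queue full */
	ctx->q->desc[ctx->shadow_tail % QSIZE] = d;
	ctx->shadow_tail++;
	return 0;
}

/* Publish all staged descriptors: merge temporary queue data into the
 * driver's private queue pair data. */
static void toy_enqueue_done(struct toy_dp_ctx *ctx)
{
	ctx->q->tail = ctx->shadow_tail;
}
```

A real PMD would additionally ring a hardware doorbell in the done call; the shadow-copy merge is the part the quoted text describes.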
> +
> +Similar to enqueue, there are two dequeue functions:
> +``rte_cryptodev_raw_dequeue`` for dequeuing a single descriptor, and
> +``rte_cryptodev_raw_dequeue_burst`` for dequeuing a burst of descriptors. The
> +dequeue functions only write back the user data that was passed to the driver
> +during inqueue, and inform the application the operation status.
during enqueue, and inform the operation status to the application.
> +Different from ``rte_cryptodev_dequeue_burst``, where the user can only
> +set an expected dequeue count and needs to read from the dequeued cryptodev
> +operations' status field, the raw data-path dequeue burst function allows
> +the user to provide callback functions to retrieve the dequeue
> +count from the enqueued user data, and write the expected status value to the
> +user data on the fly.
> +
> +As with enqueue, both ``rte_cryptodev_raw_dequeue`` and
> +``rte_cryptodev_raw_dequeue_burst`` will not wipe the dequeued descriptors
> +from the cryptodev queue unless ``rte_cryptodev_dp_dequeue_done`` is called.
> +The dequeue-related temporary queue data will be merged into the driver's
> +private queue data in the function call.
> +
> +There are a few limitations to the data path service:
> +
> +* Only in-place operations are supported.
> +* APIs are NOT thread-safe.
> +* CANNOT mix the direct APIs' enqueue with ``rte_cryptodev_enqueue_burst``,
> +  or vice versa.
> +
> +See *DPDK API Reference* for details on each API definition.
> +
> Sample code
> -----------
>
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index 20ebaef5b..d3d9f82f7 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -55,6 +55,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> + * **Added raw data-path APIs for cryptodev library.**
> +
> + Cryptodev is added raw data-path APIs to accelerate external libraries or
> + applications those want to avail fast cryptodev enqueue/dequeue
> + operations but does not necessarily depends on mbufs and cryptodev
> + operation mempool.
Raw crypto data path APIs are added to accelerate external libraries or
applications which need to perform crypto processing on raw buffers and
are not dependent on rte_mbufs or rte_crypto_op mempools.
> +
>
> Removed Items
> -------------
> diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
> index 8201189e0..e1f23d303 100644
> --- a/lib/librte_cryptodev/rte_crypto_sym.h
> +++ b/lib/librte_cryptodev/rte_crypto_sym.h
> @@ -57,7 +57,7 @@ struct rte_crypto_sgl {
> */
> struct rte_crypto_va_iova_ptr {
> void *va;
> - rte_iova_t *iova;
> + rte_iova_t iova;
> };
This should be part of 1/4 of this patchset.
>
> /**
> diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> index 1dd795bcb..daeb5f504 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.c
> +++ b/lib/librte_cryptodev/rte_cryptodev.c
> @@ -1914,6 +1914,110 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
> return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
> }
>
> +int
> +rte_cryptodev_raw_get_dp_context_size(uint8_t dev_id)
As suggested above, raw_dp should be used as the keyword in all the APIs.
Hence it should be rte_cryptodev_get_raw_dp_ctx_size.
> +{
> + struct rte_cryptodev *dev;
> + int32_t size = sizeof(struct rte_crypto_raw_dp_ctx);
> + int32_t priv_size;
> +
> + if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
> + return -EINVAL;
> +
> + dev = rte_cryptodev_pmd_get_dev(dev_id);
> +
> + if (*dev->dev_ops->get_drv_ctx_size == NULL ||
> + !(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)) {
> + return -ENOTSUP;
> + }
> +
> + priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev);
> + if (priv_size < 0)
> + return -ENOTSUP;
> +
> + return RTE_ALIGN_CEIL((size + priv_size), 8);
> +}
> +
> +int
> +rte_cryptodev_raw_configure_dp_context(uint8_t dev_id, uint16_t qp_id,
> + struct rte_crypto_raw_dp_ctx *ctx)
rte_cryptodev_configure_raw_dp_ctx
> +{
> + struct rte_cryptodev *dev;
> + union rte_cryptodev_session_ctx sess_ctx = {NULL};
> +
> + if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
> + return -EINVAL;
> +
> + dev = rte_cryptodev_pmd_get_dev(dev_id);
> + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)
> + || dev->dev_ops->configure_dp_ctx == NULL)
> + return -ENOTSUP;
> +
> + return (*dev->dev_ops->configure_dp_ctx)(dev, qp_id,
> + RTE_CRYPTO_OP_WITH_SESSION, sess_ctx, ctx);
> +}
> +
> +int
> +rte_cryptodev_raw_attach_session(uint8_t dev_id, uint16_t qp_id,
> + struct rte_crypto_raw_dp_ctx *ctx,
> + enum rte_crypto_op_sess_type sess_type,
> + union rte_cryptodev_session_ctx session_ctx)
> +{
> + struct rte_cryptodev *dev;
> +
> + if (!rte_cryptodev_get_qp_status(dev_id, qp_id))
> + return -EINVAL;
> +
> + dev = rte_cryptodev_pmd_get_dev(dev_id);
> + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_HW_RAW_DP)
> + || dev->dev_ops->configure_dp_ctx == NULL)
> + return -ENOTSUP;
> + return (*dev->dev_ops->configure_dp_ctx)(dev, qp_id, sess_type,
> + session_ctx, ctx);
What is the difference between rte_cryptodev_raw_configure_dp_context and
rte_cryptodev_raw_attach_session?
And if at all it is needed, then it should be rte_cryptodev_attach_raw_dp_session.
IMO attach is not needed, I am not clear about it.
You are calling the same dev_ops for both - one with an explicit session type and
the other from an argument.
> +}
> +
> +uint32_t
> +rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
> + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
> + void **user_data)
> +{
> + if (vec->num == 1) {
Why do we need this check? I think the user is aware that for enqueuing 1 vector,
he should use the other API. The driver will be doing the enqueue operation only one time.
> + vec->status[0] = rte_cryptodev_raw_enqueue(ctx, vec->sgl->vec,
> + vec->sgl->num, ofs, vec->iv, vec->digest, vec->aad,
> + user_data[0]);
> + return (vec->status[0] == 0) ? 1 : 0;
> + }
> +
> + return (*ctx->enqueue_burst)(ctx->qp_data, ctx->drv_ctx_data, vec,
> + ofs, user_data);
> +}
Where are rte_cryptodev_raw_enqueue and rte_cryptodev_raw_dequeue?
> +
> +int
> +rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
> + uint32_t n)
> +{
> + return (*ctx->enqueue_done)(ctx->qp_data, ctx->drv_ctx_data, n);
> +}
> +
> +int
> +rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
> + uint32_t n)
> +{
> + return (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_ctx_data, n);
> +}
> +
> +uint32_t
> +rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
> + rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
> + rte_cryptodev_raw_post_dequeue_t post_dequeue,
> + void **out_user_data, uint8_t is_user_data_array,
> + uint32_t *n_success_jobs)
> +{
> + return (*ctx->dequeue_burst)(ctx->qp_data, ctx->drv_ctx_data,
> + get_dequeue_count, post_dequeue, out_user_data,
> + is_user_data_array, n_success_jobs);
> +}
> +
> /** Initialise rte_crypto_op mempool element */
> static void
> rte_crypto_op_init(struct rte_mempool *mempool,
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 7b3ebc20f..3579ab66e 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
> /**< Support symmetric session-less operations */
> #define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23)
> /**< Support operations on data which is not byte aligned */
> -
> +#define RTE_CRYPTODEV_FF_SYM_HW_RAW_DP (1ULL << 24)
RTE_CRYPTODEV_FF_SYM_RAW_DP would be better.
Add this in doc/guides/cryptodevs/features/default.ini as well in this patch.
> +/**< Support accelerated specific raw data-path APIs */
>
> /**
> * Get the name of a crypto device feature flag
> @@ -1351,6 +1352,357 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
> struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs,
> struct rte_crypto_sym_vec *vec);
>
> +/**
> + * Get the size of the raw data-path context buffer.
> + *
> + * @param dev_id The device identifier.
> + *
> + * @return
> + * - If the device supports raw data-path APIs, return the context size.
> + * - If the device does not support the APIs, return -1.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_raw_get_dp_context_size(uint8_t dev_id);
> +
> +/**
> + * Union of different crypto session types, including session-less xform
> + * pointer.
> + */
> +union rte_cryptodev_session_ctx {
> + struct rte_cryptodev_sym_session *crypto_sess;
> + struct rte_crypto_sym_xform *xform;
> + struct rte_security_session *sec_sess;
> +};
> +
> +/**
> + * Enqueue a data vector into the device queue but the driver will not start
> + * processing until rte_cryptodev_raw_enqueue_done() is called.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param drv_ctx Driver specific context data.
> + * @param vec The array of descriptor vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param user_data The array of user data for dequeue later.
> + * @return
> + * - The number of descriptors successfully submitted.
> + */
> +typedef uint32_t (*cryptodev_dp_sym_enqueue_burst_t)(
> + void *qp, uint8_t *drv_ctx, struct rte_crypto_sym_vec *vec,
> + union rte_crypto_sym_ofs ofs, void *user_data[]);
> +
> +/**
> + * Enqueue a single descriptor into the device queue but the driver will not start
> + * processing until rte_cryptodev_raw_enqueue_done() is called.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param drv_ctx Driver specific context data.
> + * @param data_vec The buffer data vector.
> + * @param n_data_vecs Number of buffer data vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param iv IV virtual and IOVA addresses
> + * @param digest digest virtual and IOVA addresses
> + * @param aad_or_auth_iv AAD or auth IV virtual and IOVA addresses,
> + * depends on the algorithm used.
> + * @param user_data The user data.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +typedef int (*cryptodev_dp_sym_enqueue_t)(
> + void *qp, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
> + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
> + struct rte_crypto_va_iova_ptr *iv,
> + struct rte_crypto_va_iova_ptr *digest,
> + struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
> + void *user_data);
> +
> +/**
> + * Inform the cryptodev queue pair to start processing or finish dequeuing all
> + * enqueued/dequeued descriptors.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param drv_ctx Driver specific context data.
> + * @param n The total number of processed descriptors.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +typedef int (*cryptodev_dp_sym_operation_done_t)(void *qp, uint8_t *drv_ctx,
> + uint32_t n);
> +
> +/**
> + * Typedef of the user-provided function for the driver to get the dequeue count.
> + * The function may return a fixed number or the number parsed from the user
> + * data stored in the first processed descriptor.
> + *
> + * @param user_data Dequeued user data.
> + **/
> +typedef uint32_t (*rte_cryptodev_raw_get_dequeue_count_t)(void *user_data);
> +
> +/**
> + * Typedef of the user-provided function to handle post-dequeue operations,
> + * such as filling in the status.
> + *
> + * @param user_data Dequeued user data.
> + * @param index Index number of the processed descriptor.
> + * @param is_op_success Operation status provided by the driver.
> + **/
> +typedef void (*rte_cryptodev_raw_post_dequeue_t)(void *user_data,
> + uint32_t index, uint8_t is_op_success);
> +
> +/**
> + * Dequeue symmetric crypto processing of user provided data.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param drv_ctx Driver specific context data.
> + * @param get_dequeue_count User provided callback function to
> + * obtain dequeue count.
> + * @param post_dequeue User provided callback function to
> + * post-process a dequeued operation.
> + * @param out_user_data User data pointer array to be retrieved
> + * from the device queue. In case
> + * *is_user_data_array* is set there
> + * should be enough room to store all
> + * user data.
> + * @param is_user_data_array Set 1 if every dequeued user data will
> + * be written into out_user_data* array.
> + * @param n_success Driver written value to specify the
> + * total successful operations count.
> + *
> + * @return
> + * - Returns number of dequeued packets.
> + */
> +typedef uint32_t (*cryptodev_dp_sym_dequeue_burst_t)(void *qp,
> + uint8_t *drv_ctx,
> + rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
> + rte_cryptodev_raw_post_dequeue_t post_dequeue,
> + void **out_user_data, uint8_t is_user_data_array,
> + uint32_t *n_success);
> +
> +/**
> + * Dequeue symmetric crypto processing of user provided data.
> + *
> + * @param qp Driver specific queue pair data.
> + * @param drv_ctx Driver specific context data.
> + * @param out_user_data User data pointer to be retrieved from
> + * the device queue.
> + *
> + * @return
> + * - 1 if the user_data is dequeued and the operation is a success.
> + * - 0 if the user_data is dequeued but the operation is failed.
> + * - -1 if no operation is dequeued.
> + */
> +typedef int (*cryptodev_dp_sym_dequeue_t)(
> + void *qp, uint8_t *drv_ctx, void **out_user_data);
> +
> +/**
> + * Context data for raw data-path API crypto process. The buffer of this
> + * structure is to be allocated by the user application with a size equal
> + * to or bigger than the value returned by rte_cryptodev_raw_get_dp_context_size().
> + *
> + * NOTE: the buffer is to be used and maintained by the cryptodev driver, the
> + * user should NOT alter the buffer content to avoid application or system
> + * crash.
> + */
> +struct rte_crypto_raw_dp_ctx {
> + void *qp_data;
> +
> + cryptodev_dp_sym_enqueue_t enqueue;
> + cryptodev_dp_sym_enqueue_burst_t enqueue_burst;
> + cryptodev_dp_sym_operation_done_t enqueue_done;
> + cryptodev_dp_sym_dequeue_t dequeue;
> + cryptodev_dp_sym_dequeue_burst_t dequeue_burst;
> + cryptodev_dp_sym_operation_done_t dequeue_done;
These function pointers are data path only. Why do we need to add explicit dp in each one of them?
These should be cryptodev_sym_raw_**.
> +
> + /* Driver specific context data */
> + __extension__ uint8_t drv_ctx_data[];
> +};
> +
> +/**
> + * Configure raw data-path context data.
> + *
> + * NOTE:
> + * After the context data is configured, the user should call
> + * rte_cryptodev_raw_attach_session() before using it in
> + * rte_cryptodev_raw_enqueue/dequeue function call.
I am not clear on the purpose of the attach API. It looks like an overhead to me.
> + *
> + * @param dev_id The device identifier.
> + * @param qp_id The index of the queue pair from which to
> + * retrieve processed packets. The value must be
> + * in the range [0, nb_queue_pair - 1] previously
> + * supplied to rte_cryptodev_configure().
> + * @param ctx The raw data-path context data.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_raw_configure_dp_context(uint8_t dev_id, uint16_t qp_id,
> + struct rte_crypto_raw_dp_ctx *ctx);
> +
> +/**
> + * Attach a cryptodev session to an initialized raw data path context.
> + *
> + * @param dev_id The device identifier.
> + * @param qp_id The index of the queue pair from which to
> + * retrieve processed packets. The value must be
> + * in the range [0, nb_queue_pair - 1] previously
> + * supplied to rte_cryptodev_configure().
> + * @param ctx The raw data-path context data.
> + * @param sess_type session type.
> + * @param session_ctx Session context data.
> + * @return
> + * - On success return 0.
> + * - On failure return negative integer.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_raw_attach_session(uint8_t dev_id, uint16_t qp_id,
> + struct rte_crypto_raw_dp_ctx *ctx,
> + enum rte_crypto_op_sess_type sess_type,
> + union rte_cryptodev_session_ctx session_ctx);
> +
> +/**
> + * Enqueue single raw data-path descriptor.
> + *
> + * Processing of the enqueued descriptor will not start until
> + * rte_cryptodev_raw_enqueue_done() is called.
> + *
> + * @param ctx The initialized raw data-path context data.
> + * @param data_vec The buffer vector.
> + * @param n_data_vecs Number of buffer vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param iv IV virtual and IOVA addresses
> + * @param digest digest virtual and IOVA addresses
> + * @param aad_or_auth_iv AAD or auth IV virtual and IOVA addresses,
> + * depends on the algorithm used.
> + * @param user_data The user data.
> + * @return
> + * - The number of descriptors successfully enqueued.
> + */
> +__rte_experimental
> +static __rte_always_inline int
> +rte_cryptodev_raw_enqueue(struct rte_crypto_raw_dp_ctx *ctx,
> + struct rte_crypto_vec *data_vec, uint16_t n_data_vecs,
> + union rte_crypto_sym_ofs ofs,
> + struct rte_crypto_va_iova_ptr *iv,
> + struct rte_crypto_va_iova_ptr *digest,
> + struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
> + void *user_data)
> +{
> + return (*ctx->enqueue)(ctx->qp_data, ctx->drv_ctx_data, data_vec,
> + n_data_vecs, ofs, iv, digest, aad_or_auth_iv, user_data);
> +}
> +
> +/**
> + * Enqueue a data vector of raw data-path descriptors.
> + *
> + * Processing of the enqueued descriptors will not start until
> + * rte_cryptodev_raw_enqueue_done() is called.
> + *
> + * @param ctx The initialized raw data-path context data.
> + * @param vec The array of descriptor vectors.
> + * @param ofs Start and stop offsets for auth and cipher
> + * operations.
> + * @param user_data The array of opaque data for dequeue.
> + * @return
> + * - The number of descriptors successfully enqueued.
> + */
> +__rte_experimental
> +uint32_t
> +rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
> + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
> + void **user_data);
> +
> +/**
> + * Start processing all enqueued descriptors from last
> + * rte_cryptodev_raw_configure_dp_context() call.
> + *
> + * @param ctx The initialized raw data-path context data.
> + * @param n The total number of submitted descriptors.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
> + uint32_t n);
> +
> +/**
> + * Dequeue a burst of raw crypto data-path operations and write the previously
> + * enqueued user data into the array provided.
> + *
> + * The dequeued operations, including the user data stored, will not be
> + * wiped out from the device queue until rte_cryptodev_raw_dequeue_done() is
> + * called.
> + *
> + * @param ctx The initialized raw data-path context
> + * data.
> + * @param get_dequeue_count User provided callback function to
> + * obtain dequeue count.
> + * @param post_dequeue User provided callback function to
> + * post-process a dequeued operation.
> + * @param out_user_data User data pointer array to be retrieved
> + * from the device queue. In case
> + * *is_user_data_array* is set there
> + * should be enough room to store all
> + * user data.
> + * @param is_user_data_array Set 1 if every dequeued user data will
> + * be written into *out_user_data* array.
> + * @param n_success Driver written value to specific the
> + * total successful operations count.
// to specify the
> + *
> + * @return
> + * - Returns number of dequeued packets.
> + */
> +__rte_experimental
> +uint32_t
> +rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
> + rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
> + rte_cryptodev_raw_post_dequeue_t post_dequeue,
> + void **out_user_data, uint8_t is_user_data_array,
> + uint32_t *n_success);
> +
> +/**
> + * Dequeue a raw crypto data-path operation and write the previously
> + * enqueued user data.
> + *
> + * The dequeued operation, including the user data stored, will not be wiped
> + * out from the device queue until rte_cryptodev_raw_dequeue_done() is
> + * called.
> + *
> + * @param ctx The initialized raw data-path context
> + * data.
> + * @param out_user_data User data pointer to be retrieved from
> + * device queue. The driver shall support
> + * NULL input of this parameter.
> + *
> + * @return
> + * - 1 if the user data is dequeued and the operation is a success.
> + * - 0 if the user data is dequeued but the operation is failed.
> + * - -1 if no operation is ready to be dequeued.
> + */
> +__rte_experimental
> +static __rte_always_inline int
Why is this function specifically inline and not others?
> +rte_cryptodev_raw_dequeue(struct rte_crypto_raw_dp_ctx *ctx,
> + void **out_user_data)
> +{
> + return (*ctx->dequeue)(ctx->qp_data, ctx->drv_ctx_data, out_user_data);
> +}
> +
> +/**
> + * Inform the queue pair that dequeue operations have finished.
> + *
> + * @param ctx The initialized raw data-path context data.
> + * @param n The total number of jobs already dequeued.
> + */
> +__rte_experimental
> +int
> +rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
> + uint32_t n);
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> index 81975d72b..69a2a6d64 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> @@ -316,6 +316,40 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
> (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
> union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec);
>
> +/**
> + * Typedef of the driver-provided function to get the service context private data size.
> + *
> + * @param dev Crypto device pointer.
> + *
> + * @return
> + * - On success return the size of the device's service context private data.
> + * - On failure return