From: Brian Dooley <brian.dooley@intel.com>
To: Kai Ji <kai.ji@intel.com>,
Pablo de Lara <pablo.de.lara.guarch@intel.com>
Cc: dev@dpdk.org, gakhil@marvell.com, Brian Dooley <brian.dooley@intel.com>
Subject: [PATCH v2] crypto/ipsec_mb: unified IPsec MB interface
Date: Thu, 14 Dec 2023 15:15:44 +0000
Message-ID: <20231214151544.2189302-1-brian.dooley@intel.com>
In-Reply-To: <20231212153640.1561504-1-brian.dooley@intel.com>
The intel-ipsec-mb library provides both a job API and a direct API.
The AESNI_MB PMD uses the job API codepath, while the ZUC, KASUMI,
SNOW3G, CHACHA20_POLY1305 and AESNI_GCM PMDs use the direct API.
Switch these PMDs over to the job API codepath, removing all use of
the IPsec MB direct API from them.
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
v2:
- Fix compilation failure
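Below is a minimal sketch (illustration only, not part of the patch)
contrasting the two intel-ipsec-mb codepaths this patch unifies, using
AES-128-GCM encryption. It assumes the v1.x library API; the helper
name gcm_direct_vs_job and its buffer/key parameters are hypothetical
and error handling is omitted:

#include <intel-ipsec-mb.h>

static void
gcm_direct_vs_job(IMB_MGR *mb_mgr, const uint8_t *key16, uint8_t *buf,
                  uint64_t len, uint8_t *iv12, uint8_t *aad,
                  uint64_t aad_len, uint8_t *tag16)
{
        struct gcm_key_data gkey;
        struct gcm_context_data gctx;
        IMB_JOB *job;

        /* Direct API: per-algorithm entry points, as used today by the
         * ZUC/KASUMI/SNOW3G/CHACHA20_POLY1305/AESNI_GCM PMDs.
         */
        IMB_AES128_GCM_PRE(mb_mgr, key16, &gkey);
        IMB_AES128_GCM_INIT(mb_mgr, &gkey, &gctx, iv12, aad, aad_len);
        IMB_AES128_GCM_ENC_UPDATE(mb_mgr, &gkey, &gctx, buf, buf, len);
        IMB_AES128_GCM_ENC_FINALIZE(mb_mgr, &gkey, &gctx, tag16, 16);

        /* Job API: one generic submit/flush path, as used by AESNI_MB
         * and, after this patch, by all of the PMDs above.
         */
        job = IMB_GET_NEXT_JOB(mb_mgr);
        job->chain_order = IMB_ORDER_CIPHER_HASH;
        job->cipher_mode = IMB_CIPHER_GCM;
        job->cipher_direction = IMB_DIR_ENCRYPT;
        job->hash_alg = IMB_AUTH_AES_GMAC;
        job->enc_keys = &gkey;
        job->dec_keys = &gkey;
        job->key_len_in_bytes = 16;
        job->iv = iv12;
        job->iv_len_in_bytes = 12;
        job->u.GCM.aad = aad;
        job->u.GCM.aad_len_in_bytes = aad_len;
        job->src = buf;
        job->dst = buf;
        job->cipher_start_src_offset_in_bytes = 0;
        job->msg_len_to_cipher_in_bytes = len;
        job->hash_start_src_offset_in_bytes = 0;
        job->msg_len_to_hash_in_bytes = len;
        job->auth_tag_output = tag16;
        job->auth_tag_output_len_in_bytes = 16;

        /* The manager may batch jobs internally: submit returns a
         * completed job or NULL; flush drains anything still in flight.
         */
        job = IMB_SUBMIT_JOB(mb_mgr);
        while (job == NULL)
                job = IMB_FLUSH_JOB(mb_mgr);
}

With the direct-API paths deleted below, each PMD plugs the shared
aesni_mb_dequeue_burst(), aesni_mb_process_bulk() and
aesni_mb_session_configure() symbols (newly exported via
pmd_aesni_mb_priv.h) into its registration data.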
---
drivers/crypto/ipsec_mb/pmd_aesni_gcm.c | 758 +-------------------
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 8 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h | 13 +
drivers/crypto/ipsec_mb/pmd_chacha_poly.c | 335 +--------
drivers/crypto/ipsec_mb/pmd_kasumi.c | 404 +----------
drivers/crypto/ipsec_mb/pmd_snow3g.c | 540 +-------------
drivers/crypto/ipsec_mb/pmd_zuc.c | 342 +--------
7 files changed, 39 insertions(+), 2361 deletions(-)
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c b/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c
index 8d40bd9169..44609333ee 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c
@@ -3,753 +3,7 @@
*/
#include "pmd_aesni_gcm_priv.h"
-
-static void
-aesni_gcm_set_ops(struct aesni_gcm_ops *ops, IMB_MGR *mb_mgr)
-{
- /* Set 128 bit function pointers. */
- ops[GCM_KEY_128].pre = mb_mgr->gcm128_pre;
- ops[GCM_KEY_128].init = mb_mgr->gcm128_init;
-
- ops[GCM_KEY_128].enc = mb_mgr->gcm128_enc;
- ops[GCM_KEY_128].update_enc = mb_mgr->gcm128_enc_update;
- ops[GCM_KEY_128].finalize_enc = mb_mgr->gcm128_enc_finalize;
-
- ops[GCM_KEY_128].dec = mb_mgr->gcm128_dec;
- ops[GCM_KEY_128].update_dec = mb_mgr->gcm128_dec_update;
- ops[GCM_KEY_128].finalize_dec = mb_mgr->gcm128_dec_finalize;
-
- ops[GCM_KEY_128].gmac_init = mb_mgr->gmac128_init;
- ops[GCM_KEY_128].gmac_update = mb_mgr->gmac128_update;
- ops[GCM_KEY_128].gmac_finalize = mb_mgr->gmac128_finalize;
-
- /* Set 192 bit function pointers. */
- ops[GCM_KEY_192].pre = mb_mgr->gcm192_pre;
- ops[GCM_KEY_192].init = mb_mgr->gcm192_init;
-
- ops[GCM_KEY_192].enc = mb_mgr->gcm192_enc;
- ops[GCM_KEY_192].update_enc = mb_mgr->gcm192_enc_update;
- ops[GCM_KEY_192].finalize_enc = mb_mgr->gcm192_enc_finalize;
-
- ops[GCM_KEY_192].dec = mb_mgr->gcm192_dec;
- ops[GCM_KEY_192].update_dec = mb_mgr->gcm192_dec_update;
- ops[GCM_KEY_192].finalize_dec = mb_mgr->gcm192_dec_finalize;
-
- ops[GCM_KEY_192].gmac_init = mb_mgr->gmac192_init;
- ops[GCM_KEY_192].gmac_update = mb_mgr->gmac192_update;
- ops[GCM_KEY_192].gmac_finalize = mb_mgr->gmac192_finalize;
-
- /* Set 256 bit function pointers. */
- ops[GCM_KEY_256].pre = mb_mgr->gcm256_pre;
- ops[GCM_KEY_256].init = mb_mgr->gcm256_init;
-
- ops[GCM_KEY_256].enc = mb_mgr->gcm256_enc;
- ops[GCM_KEY_256].update_enc = mb_mgr->gcm256_enc_update;
- ops[GCM_KEY_256].finalize_enc = mb_mgr->gcm256_enc_finalize;
-
- ops[GCM_KEY_256].dec = mb_mgr->gcm256_dec;
- ops[GCM_KEY_256].update_dec = mb_mgr->gcm256_dec_update;
- ops[GCM_KEY_256].finalize_dec = mb_mgr->gcm256_dec_finalize;
-
- ops[GCM_KEY_256].gmac_init = mb_mgr->gmac256_init;
- ops[GCM_KEY_256].gmac_update = mb_mgr->gmac256_update;
- ops[GCM_KEY_256].gmac_finalize = mb_mgr->gmac256_finalize;
-}
-
-static int
-aesni_gcm_session_configure(IMB_MGR *mb_mgr, void *session,
- const struct rte_crypto_sym_xform *xform)
-{
- struct aesni_gcm_session *sess = session;
- const struct rte_crypto_sym_xform *auth_xform;
- const struct rte_crypto_sym_xform *cipher_xform;
- const struct rte_crypto_sym_xform *aead_xform;
-
- uint8_t key_length;
- const uint8_t *key;
- enum ipsec_mb_operation mode;
- int ret = 0;
-
- ret = ipsec_mb_parse_xform(xform, &mode, &auth_xform,
- &cipher_xform, &aead_xform);
- if (ret)
- return ret;
-
- /**< GCM key type */
-
- sess->op = mode;
-
- switch (sess->op) {
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- /* AES-GMAC
- * auth_xform = xform;
- */
- if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC) {
- IPSEC_MB_LOG(ERR,
- "Only AES GMAC is supported as an authentication only algorithm");
- ret = -ENOTSUP;
- goto error_exit;
- }
- /* Set IV parameters */
- sess->iv.offset = auth_xform->auth.iv.offset;
- sess->iv.length = auth_xform->auth.iv.length;
- key_length = auth_xform->auth.key.length;
- key = auth_xform->auth.key.data;
- sess->req_digest_length =
- RTE_MIN(auth_xform->auth.digest_length,
- DIGEST_LENGTH_MAX);
- break;
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT:
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT:
- /* AES-GCM
- * aead_xform = xform;
- */
-
- if (aead_xform->aead.algo != RTE_CRYPTO_AEAD_AES_GCM) {
- IPSEC_MB_LOG(ERR,
- "The only combined operation supported is AES GCM");
- ret = -ENOTSUP;
- goto error_exit;
- }
- /* Set IV parameters */
- sess->iv.offset = aead_xform->aead.iv.offset;
- sess->iv.length = aead_xform->aead.iv.length;
- key_length = aead_xform->aead.key.length;
- key = aead_xform->aead.key.data;
- sess->aad_length = aead_xform->aead.aad_length;
- sess->req_digest_length =
- RTE_MIN(aead_xform->aead.digest_length,
- DIGEST_LENGTH_MAX);
- break;
- default:
- IPSEC_MB_LOG(
- ERR, "Wrong xform type, has to be AEAD or authentication");
- ret = -ENOTSUP;
- goto error_exit;
- }
-
- /* Check key length, and calculate GCM pre-compute. */
- switch (key_length) {
- case 16:
- sess->key_length = GCM_KEY_128;
- mb_mgr->gcm128_pre(key, &sess->gdata_key);
- break;
- case 24:
- sess->key_length = GCM_KEY_192;
- mb_mgr->gcm192_pre(key, &sess->gdata_key);
- break;
- case 32:
- sess->key_length = GCM_KEY_256;
- mb_mgr->gcm256_pre(key, &sess->gdata_key);
- break;
- default:
- IPSEC_MB_LOG(ERR, "Invalid key length");
- ret = -EINVAL;
- goto error_exit;
- }
-
- /* Digest check */
- if (sess->req_digest_length > 16) {
- IPSEC_MB_LOG(ERR, "Invalid digest length");
- ret = -EINVAL;
- goto error_exit;
- }
- /*
- * If size requested is different, generate the full digest
- * (16 bytes) in a temporary location and then memcpy
- * the requested number of bytes.
- */
- if (sess->req_digest_length < 4)
- sess->gen_digest_length = 16;
- else
- sess->gen_digest_length = sess->req_digest_length;
-
-error_exit:
- return ret;
-}
-
-/**
- * Process a completed job and return rte_mbuf which job processed
- *
- * @param job IMB_JOB job to process
- *
- * @return
- * - Returns processed mbuf which is trimmed of output digest used in
- * verification of supplied digest in the case of a HASH_CIPHER operation
- * - Returns NULL on invalid job
- */
-static void
-post_process_gcm_crypto_op(struct ipsec_mb_qp *qp,
- struct rte_crypto_op *op,
- struct aesni_gcm_session *session)
-{
- struct aesni_gcm_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
-
- op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /* Verify digest if required */
- if (session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT ||
- session->op == IPSEC_MB_OP_HASH_VERIFY_ONLY) {
- uint8_t *digest;
-
- uint8_t *tag = qp_data->temp_digest;
-
- if (session->op == IPSEC_MB_OP_HASH_VERIFY_ONLY)
- digest = op->sym->auth.digest.data;
- else
- digest = op->sym->aead.digest.data;
-
-#ifdef RTE_LIBRTE_PMD_AESNI_GCM_DEBUG
- rte_hexdump(stdout, "auth tag (orig):",
- digest, session->req_digest_length);
- rte_hexdump(stdout, "auth tag (calc):",
- tag, session->req_digest_length);
-#endif
-
- if (memcmp(tag, digest, session->req_digest_length) != 0)
- op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- } else {
- if (session->req_digest_length != session->gen_digest_length) {
- if (session->op ==
- IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT)
- memcpy(op->sym->aead.digest.data,
- qp_data->temp_digest,
- session->req_digest_length);
- else
- memcpy(op->sym->auth.digest.data,
- qp_data->temp_digest,
- session->req_digest_length);
- }
- }
-}
-
-/**
- * Process a completed GCM request
- *
- * @param qp Queue Pair to process
- * @param op Crypto operation
- * @param sess AESNI-GCM session
- *
- */
-static void
-handle_completed_gcm_crypto_op(struct ipsec_mb_qp *qp,
- struct rte_crypto_op *op,
- struct aesni_gcm_session *sess)
-{
- post_process_gcm_crypto_op(qp, op, sess);
-
- /* Free session if a session-less crypto op */
- if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(sess, 0, sizeof(struct aesni_gcm_session));
- rte_mempool_put(qp->sess_mp, op->sym->session);
- op->sym->session = NULL;
- }
-}
-
-/**
- * Process a crypto operation, calling
- * the GCM API from the multi buffer library.
- *
- * @param qp queue pair
- * @param op symmetric crypto operation
- * @param session GCM session
- *
- * @return
- * 0 on success
- */
-static int
-process_gcm_crypto_op(struct ipsec_mb_qp *qp, struct rte_crypto_op *op,
- struct aesni_gcm_session *session)
-{
- struct aesni_gcm_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
- uint8_t *src, *dst;
- uint8_t *iv_ptr;
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- uint32_t offset, data_offset, data_length;
- uint32_t part_len, total_len, data_len;
- uint8_t *tag;
- unsigned int oop = 0;
- struct aesni_gcm_ops *ops = &qp_data->ops[session->key_length];
-
- if (session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT ||
- session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT) {
- offset = sym_op->aead.data.offset;
- data_offset = offset;
- data_length = sym_op->aead.data.length;
- } else {
- offset = sym_op->auth.data.offset;
- data_offset = offset;
- data_length = sym_op->auth.data.length;
- }
-
- RTE_ASSERT(m_src != NULL);
-
- while (offset >= m_src->data_len && data_length != 0) {
- offset -= m_src->data_len;
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
- }
-
- src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
-
- data_len = m_src->data_len - offset;
- part_len = (data_len < data_length) ? data_len :
- data_length;
-
- RTE_ASSERT((sym_op->m_dst == NULL) ||
- ((sym_op->m_dst != NULL) &&
- rte_pktmbuf_is_contiguous(sym_op->m_dst)));
-
- /* In-place */
- if (sym_op->m_dst == NULL || (sym_op->m_dst == sym_op->m_src))
- dst = src;
- /* Out-of-place */
- else {
- oop = 1;
- /* Segmented destination buffer is not supported
- * if operation is Out-of-place
- */
- RTE_ASSERT(rte_pktmbuf_is_contiguous(sym_op->m_dst));
- dst = rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
- data_offset);
- }
-
- iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- session->iv.offset);
-
- if (session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT) {
- ops->init(&session->gdata_key, &qp_data->gcm_ctx_data, iv_ptr,
- sym_op->aead.aad.data,
- (uint64_t)session->aad_length);
-
- ops->update_enc(&session->gdata_key, &qp_data->gcm_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len = data_length - part_len;
-
- while (total_len) {
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
-
- src = rte_pktmbuf_mtod(m_src, uint8_t *);
- if (oop)
- dst += part_len;
- else
- dst = src;
- part_len = (m_src->data_len < total_len) ?
- m_src->data_len : total_len;
-
- ops->update_enc(&session->gdata_key,
- &qp_data->gcm_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len -= part_len;
- }
-
- if (session->req_digest_length != session->gen_digest_length)
- tag = qp_data->temp_digest;
- else
- tag = sym_op->aead.digest.data;
-
- ops->finalize_enc(&session->gdata_key, &qp_data->gcm_ctx_data,
- tag, session->gen_digest_length);
- } else if (session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT) {
- ops->init(&session->gdata_key, &qp_data->gcm_ctx_data, iv_ptr,
- sym_op->aead.aad.data,
- (uint64_t)session->aad_length);
-
- ops->update_dec(&session->gdata_key, &qp_data->gcm_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len = data_length - part_len;
-
- while (total_len) {
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
-
- src = rte_pktmbuf_mtod(m_src, uint8_t *);
- if (oop)
- dst += part_len;
- else
- dst = src;
- part_len = (m_src->data_len < total_len) ?
- m_src->data_len : total_len;
-
- ops->update_dec(&session->gdata_key,
- &qp_data->gcm_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len -= part_len;
- }
-
- tag = qp_data->temp_digest;
- ops->finalize_dec(&session->gdata_key, &qp_data->gcm_ctx_data,
- tag, session->gen_digest_length);
- } else if (session->op == IPSEC_MB_OP_HASH_GEN_ONLY) {
- ops->gmac_init(&session->gdata_key, &qp_data->gcm_ctx_data,
- iv_ptr, session->iv.length);
-
- ops->gmac_update(&session->gdata_key, &qp_data->gcm_ctx_data,
- src, (uint64_t)part_len);
- total_len = data_length - part_len;
-
- while (total_len) {
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
-
- src = rte_pktmbuf_mtod(m_src, uint8_t *);
- part_len = (m_src->data_len < total_len) ?
- m_src->data_len : total_len;
-
- ops->gmac_update(&session->gdata_key,
- &qp_data->gcm_ctx_data, src,
- (uint64_t)part_len);
- total_len -= part_len;
- }
-
- if (session->req_digest_length != session->gen_digest_length)
- tag = qp_data->temp_digest;
- else
- tag = sym_op->auth.digest.data;
-
- ops->gmac_finalize(&session->gdata_key, &qp_data->gcm_ctx_data,
- tag, session->gen_digest_length);
- } else { /* IPSEC_MB_OP_HASH_VERIFY_ONLY */
- ops->gmac_init(&session->gdata_key, &qp_data->gcm_ctx_data,
- iv_ptr, session->iv.length);
-
- ops->gmac_update(&session->gdata_key, &qp_data->gcm_ctx_data,
- src, (uint64_t)part_len);
- total_len = data_length - part_len;
-
- while (total_len) {
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
-
- src = rte_pktmbuf_mtod(m_src, uint8_t *);
- part_len = (m_src->data_len < total_len) ?
- m_src->data_len : total_len;
-
- ops->gmac_update(&session->gdata_key,
- &qp_data->gcm_ctx_data, src,
- (uint64_t)part_len);
- total_len -= part_len;
- }
-
- tag = qp_data->temp_digest;
-
- ops->gmac_finalize(&session->gdata_key, &qp_data->gcm_ctx_data,
- tag, session->gen_digest_length);
- }
- return 0;
-}
-
-/** Get gcm session */
-static inline struct aesni_gcm_session *
-aesni_gcm_get_session(struct ipsec_mb_qp *qp,
- struct rte_crypto_op *op)
-{
- struct rte_cryptodev_sym_session *sess = NULL;
- struct rte_crypto_sym_op *sym_op = op->sym;
-
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- if (likely(sym_op->session != NULL))
- sess = sym_op->session;
- } else {
- if (rte_mempool_get(qp->sess_mp, (void **)&sess))
- return NULL;
-
- if (unlikely(sess->sess_data_sz <
- sizeof(struct aesni_gcm_session))) {
- rte_mempool_put(qp->sess_mp, sess);
- return NULL;
- }
-
- if (unlikely(aesni_gcm_session_configure(qp->mb_mgr,
- CRYPTODEV_GET_SYM_SESS_PRIV(sess),
- sym_op->xform) != 0)) {
- rte_mempool_put(qp->sess_mp, sess);
- sess = NULL;
- }
- sym_op->session = sess;
- }
-
- if (unlikely(sess == NULL))
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
-
- return CRYPTODEV_GET_SYM_SESS_PRIV(sess);
-}
-
-static uint16_t
-aesni_gcm_pmd_dequeue_burst(void *queue_pair,
- struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- struct aesni_gcm_session *sess;
- struct ipsec_mb_qp *qp = queue_pair;
-
- int retval = 0;
- unsigned int i, nb_dequeued;
-
- nb_dequeued = rte_ring_dequeue_burst(qp->ingress_queue,
- (void **)ops, nb_ops, NULL);
-
- for (i = 0; i < nb_dequeued; i++) {
-
- sess = aesni_gcm_get_session(qp, ops[i]);
- if (unlikely(sess == NULL)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- qp->stats.dequeue_err_count++;
- break;
- }
-
- retval = process_gcm_crypto_op(qp, ops[i], sess);
- if (retval < 0) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- qp->stats.dequeue_err_count++;
- break;
- }
-
- handle_completed_gcm_crypto_op(qp, ops[i], sess);
- }
-
- qp->stats.dequeued_count += i;
-
- return i;
-}
-
-static inline void
-aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec,
- int32_t errnum)
-{
- uint32_t i;
-
- for (i = 0; i < vec->num; i++)
- vec->status[i] = errnum;
-}
-
-static inline int32_t
-aesni_gcm_sgl_op_finalize_encryption(const struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- uint8_t *digest, struct aesni_gcm_ops ops)
-{
- if (s->req_digest_length != s->gen_digest_length) {
- uint8_t tmpdigest[s->gen_digest_length];
-
- ops.finalize_enc(&s->gdata_key, gdata_ctx, tmpdigest,
- s->gen_digest_length);
- memcpy(digest, tmpdigest, s->req_digest_length);
- } else {
- ops.finalize_enc(&s->gdata_key, gdata_ctx, digest,
- s->gen_digest_length);
- }
-
- return 0;
-}
-
-static inline int32_t
-aesni_gcm_sgl_op_finalize_decryption(const struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- uint8_t *digest, struct aesni_gcm_ops ops)
-{
- uint8_t tmpdigest[s->gen_digest_length];
-
- ops.finalize_dec(&s->gdata_key, gdata_ctx, tmpdigest,
- s->gen_digest_length);
-
- return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0
- : EBADMSG;
-}
-
-static inline void
-aesni_gcm_process_gcm_sgl_op(const struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- struct rte_crypto_sgl *sgl, void *iv, void *aad,
- struct aesni_gcm_ops ops)
-{
- uint32_t i;
-
- /* init crypto operation */
- ops.init(&s->gdata_key, gdata_ctx, iv, aad,
- (uint64_t)s->aad_length);
-
- /* update with sgl data */
- for (i = 0; i < sgl->num; i++) {
- struct rte_crypto_vec *vec = &sgl->vec[i];
-
- switch (s->op) {
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT:
- ops.update_enc(&s->gdata_key, gdata_ctx,
- vec->base, vec->base, vec->len);
- break;
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT:
- ops.update_dec(&s->gdata_key, gdata_ctx,
- vec->base, vec->base, vec->len);
- break;
- default:
- IPSEC_MB_LOG(ERR, "Invalid session op");
- break;
- }
-
- }
-}
-
-static inline void
-aesni_gcm_process_gmac_sgl_op(const struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- struct rte_crypto_sgl *sgl, void *iv,
- struct aesni_gcm_ops ops)
-{
- ops.init(&s->gdata_key, gdata_ctx, iv, sgl->vec[0].base,
- sgl->vec[0].len);
-}
-
-static inline uint32_t
-aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- struct rte_crypto_sym_vec *vec,
- struct aesni_gcm_ops ops)
-{
- uint32_t i, processed;
-
- processed = 0;
- for (i = 0; i < vec->num; ++i) {
- aesni_gcm_process_gcm_sgl_op(s, gdata_ctx, &vec->src_sgl[i],
- vec->iv[i].va, vec->aad[i].va,
- ops);
- vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(
- s, gdata_ctx, vec->digest[i].va, ops);
- processed += (vec->status[i] == 0);
- }
-
- return processed;
-}
-
-static inline uint32_t
-aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- struct rte_crypto_sym_vec *vec,
- struct aesni_gcm_ops ops)
-{
- uint32_t i, processed;
-
- processed = 0;
- for (i = 0; i < vec->num; ++i) {
- aesni_gcm_process_gcm_sgl_op(s, gdata_ctx, &vec->src_sgl[i],
- vec->iv[i].va, vec->aad[i].va,
- ops);
- vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(
- s, gdata_ctx, vec->digest[i].va, ops);
- processed += (vec->status[i] == 0);
- }
-
- return processed;
-}
-
-static inline uint32_t
-aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- struct rte_crypto_sym_vec *vec,
- struct aesni_gcm_ops ops)
-{
- uint32_t i, processed;
-
- processed = 0;
- for (i = 0; i < vec->num; ++i) {
- if (vec->src_sgl[i].num != 1) {
- vec->status[i] = ENOTSUP;
- continue;
- }
-
- aesni_gcm_process_gmac_sgl_op(s, gdata_ctx, &vec->src_sgl[i],
- vec->iv[i].va, ops);
- vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(
- s, gdata_ctx, vec->digest[i].va, ops);
- processed += (vec->status[i] == 0);
- }
-
- return processed;
-}
-
-static inline uint32_t
-aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
- struct gcm_context_data *gdata_ctx,
- struct rte_crypto_sym_vec *vec,
- struct aesni_gcm_ops ops)
-{
- uint32_t i, processed;
-
- processed = 0;
- for (i = 0; i < vec->num; ++i) {
- if (vec->src_sgl[i].num != 1) {
- vec->status[i] = ENOTSUP;
- continue;
- }
-
- aesni_gcm_process_gmac_sgl_op(s, gdata_ctx, &vec->src_sgl[i],
- vec->iv[i].va, ops);
- vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(
- s, gdata_ctx, vec->digest[i].va, ops);
- processed += (vec->status[i] == 0);
- }
-
- return processed;
-}
-
-/** Process CPU crypto bulk operations */
-static uint32_t
-aesni_gcm_process_bulk(struct rte_cryptodev *dev __rte_unused,
- struct rte_cryptodev_sym_session *sess,
- __rte_unused union rte_crypto_sym_ofs ofs,
- struct rte_crypto_sym_vec *vec)
-{
- struct aesni_gcm_session *s = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
- struct gcm_context_data gdata_ctx;
- IMB_MGR *mb_mgr;
-
- /* get per-thread MB MGR, create one if needed */
- mb_mgr = get_per_thread_mb_mgr();
- if (unlikely(mb_mgr == NULL))
- return 0;
-
- /* Check if function pointers have been set for this thread ops. */
- if (unlikely(RTE_PER_LCORE(gcm_ops)[s->key_length].init == NULL))
- aesni_gcm_set_ops(RTE_PER_LCORE(gcm_ops), mb_mgr);
-
- switch (s->op) {
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT:
- return aesni_gcm_sgl_encrypt(s, &gdata_ctx, vec,
- RTE_PER_LCORE(gcm_ops)[s->key_length]);
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT:
- return aesni_gcm_sgl_decrypt(s, &gdata_ctx, vec,
- RTE_PER_LCORE(gcm_ops)[s->key_length]);
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- return aesni_gmac_sgl_generate(s, &gdata_ctx, vec,
- RTE_PER_LCORE(gcm_ops)[s->key_length]);
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- return aesni_gmac_sgl_verify(s, &gdata_ctx, vec,
- RTE_PER_LCORE(gcm_ops)[s->key_length]);
- default:
- aesni_gcm_fill_error_code(vec, EINVAL);
- return 0;
- }
-}
-
-static int
-aesni_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
- const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
-{
- int ret = ipsec_mb_qp_setup(dev, qp_id, qp_conf, socket_id);
- if (ret < 0)
- return ret;
-
- struct ipsec_mb_qp *qp = dev->data->queue_pairs[qp_id];
- struct aesni_gcm_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
- aesni_gcm_set_ops(qp_data->ops, qp->mb_mgr);
- return 0;
-}
+#include "pmd_aesni_mb_priv.h"
struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
.dev_configure = ipsec_mb_config,
@@ -762,10 +16,10 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
.dev_infos_get = ipsec_mb_info_get,
- .queue_pair_setup = aesni_gcm_qp_setup,
+ .queue_pair_setup = ipsec_mb_qp_setup,
.queue_pair_release = ipsec_mb_qp_release,
- .sym_cpu_process = aesni_gcm_process_bulk,
+ .sym_cpu_process = aesni_mb_process_bulk,
.sym_session_get_size = ipsec_mb_sym_session_get_size,
.sym_session_configure = ipsec_mb_sym_session_configure,
@@ -801,7 +55,7 @@ RTE_INIT(ipsec_mb_register_aesni_gcm)
&ipsec_mb_pmds[IPSEC_MB_PMD_TYPE_AESNI_GCM];
aesni_gcm_data->caps = aesni_gcm_capabilities;
- aesni_gcm_data->dequeue_burst = aesni_gcm_pmd_dequeue_burst;
+ aesni_gcm_data->dequeue_burst = aesni_mb_dequeue_burst;
aesni_gcm_data->feature_flags =
RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
@@ -814,6 +68,6 @@ RTE_INIT(ipsec_mb_register_aesni_gcm)
aesni_gcm_data->ops = &aesni_gcm_pmd_ops;
aesni_gcm_data->qp_priv_size = sizeof(struct aesni_gcm_qp_data);
aesni_gcm_data->queue_pair_configure = NULL;
- aesni_gcm_data->session_configure = aesni_gcm_session_configure;
- aesni_gcm_data->session_priv_size = sizeof(struct aesni_gcm_session);
+ aesni_gcm_data->session_configure = aesni_mb_session_configure;
+ aesni_gcm_data->session_priv_size = sizeof(struct aesni_mb_session);
}
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 4de4866cf3..6f0a1de24d 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -761,7 +761,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
}
/** Configure a aesni multi-buffer session from a crypto xform chain */
-static int
+int
aesni_mb_session_configure(IMB_MGR *mb_mgr,
void *priv_sess,
const struct rte_crypto_sym_xform *xform)
@@ -2131,7 +2131,7 @@ set_job_null_op(IMB_JOB *job, struct rte_crypto_op *op)
}
#if IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM
-static uint16_t
+uint16_t
aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
uint16_t nb_ops)
{
@@ -2321,7 +2321,7 @@ flush_mb_mgr(struct ipsec_mb_qp *qp, IMB_MGR *mb_mgr,
return processed_ops;
}
-static uint16_t
+uint16_t
aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
uint16_t nb_ops)
{
@@ -2456,7 +2456,7 @@ verify_sync_dgst(struct rte_crypto_sym_vec *vec,
return k;
}
-static uint32_t
+uint32_t
aesni_mb_process_bulk(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs sofs,
struct rte_crypto_sym_vec *vec)
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h b/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h
index 85994fe5a1..9f0a89d20b 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb_priv.h
@@ -21,6 +21,19 @@
#define MAX_NUM_SEGS 16
#endif
+int
+aesni_mb_session_configure(IMB_MGR * m __rte_unused, void *priv_sess,
+ const struct rte_crypto_sym_xform *xform);
+
+uint16_t
+aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+ uint16_t nb_ops);
+
+uint32_t
+aesni_mb_process_bulk(struct rte_cryptodev *dev __rte_unused,
+ struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs sofs,
+ struct rte_crypto_sym_vec *vec);
+
static const struct rte_cryptodev_capabilities aesni_mb_capabilities[] = {
{ /* MD5 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
diff --git a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
index 97e7cef233..93f8e3588e 100644
--- a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
+++ b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
@@ -3,334 +3,7 @@
*/
#include "pmd_chacha_poly_priv.h"
-
-/** Parse crypto xform chain and set private session parameters. */
-static int
-chacha20_poly1305_session_configure(IMB_MGR * mb_mgr __rte_unused,
- void *priv_sess, const struct rte_crypto_sym_xform *xform)
-{
- struct chacha20_poly1305_session *sess = priv_sess;
- const struct rte_crypto_sym_xform *auth_xform;
- const struct rte_crypto_sym_xform *cipher_xform;
- const struct rte_crypto_sym_xform *aead_xform;
-
- uint8_t key_length;
- const uint8_t *key;
- enum ipsec_mb_operation mode;
- int ret = 0;
-
- ret = ipsec_mb_parse_xform(xform, &mode, &auth_xform,
- &cipher_xform, &aead_xform);
- if (ret)
- return ret;
-
- sess->op = mode;
-
- switch (sess->op) {
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT:
- case IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT:
- if (aead_xform->aead.algo !=
- RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
- IPSEC_MB_LOG(ERR,
- "The only combined operation supported is CHACHA20 POLY1305");
- ret = -ENOTSUP;
- goto error_exit;
- }
- /* Set IV parameters */
- sess->iv.offset = aead_xform->aead.iv.offset;
- sess->iv.length = aead_xform->aead.iv.length;
- key_length = aead_xform->aead.key.length;
- key = aead_xform->aead.key.data;
- sess->aad_length = aead_xform->aead.aad_length;
- sess->req_digest_length = aead_xform->aead.digest_length;
- break;
- default:
- IPSEC_MB_LOG(
- ERR, "Wrong xform type, has to be AEAD or authentication");
- ret = -ENOTSUP;
- goto error_exit;
- }
-
- /* IV check */
- if (sess->iv.length != CHACHA20_POLY1305_IV_LENGTH &&
- sess->iv.length != 0) {
- IPSEC_MB_LOG(ERR, "Wrong IV length");
- ret = -EINVAL;
- goto error_exit;
- }
-
- /* Check key length */
- if (key_length != CHACHA20_POLY1305_KEY_SIZE) {
- IPSEC_MB_LOG(ERR, "Invalid key length");
- ret = -EINVAL;
- goto error_exit;
- } else {
- memcpy(sess->key, key, CHACHA20_POLY1305_KEY_SIZE);
- }
-
- /* Digest check */
- if (sess->req_digest_length != CHACHA20_POLY1305_DIGEST_LENGTH) {
- IPSEC_MB_LOG(ERR, "Invalid digest length");
- ret = -EINVAL;
- goto error_exit;
- } else {
- sess->gen_digest_length = CHACHA20_POLY1305_DIGEST_LENGTH;
- }
-
-error_exit:
- return ret;
-}
-
-/**
- * Process a crypto operation, calling
- * the direct chacha poly API from the multi buffer library.
- *
- * @param qp queue pair
- * @param op symmetric crypto operation
- * @param session chacha poly session
- *
- * @return
- * - Return 0 if success
- */
-static int
-chacha20_poly1305_crypto_op(struct ipsec_mb_qp *qp, struct rte_crypto_op *op,
- struct chacha20_poly1305_session *session)
-{
- struct chacha20_poly1305_qp_data *qp_data =
- ipsec_mb_get_qp_private_data(qp);
- uint8_t *src, *dst;
- uint8_t *iv_ptr;
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- uint32_t offset, data_offset, data_length;
- uint32_t part_len, data_len;
- int total_len;
- uint8_t *tag;
- unsigned int oop = 0;
-
- offset = sym_op->aead.data.offset;
- data_offset = offset;
- data_length = sym_op->aead.data.length;
- RTE_ASSERT(m_src != NULL);
-
- while (offset >= m_src->data_len && data_length != 0) {
- offset -= m_src->data_len;
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
- }
-
- src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
-
- data_len = m_src->data_len - offset;
- part_len = (data_len < data_length) ? data_len :
- data_length;
-
- /* In-place */
- if (sym_op->m_dst == NULL || (sym_op->m_dst == sym_op->m_src))
- dst = src;
- /* Out-of-place */
- else {
- oop = 1;
- /* Segmented destination buffer is not supported
- * if operation is Out-of-place
- */
- RTE_ASSERT(rte_pktmbuf_is_contiguous(sym_op->m_dst));
- dst = rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
- data_offset);
- }
-
- iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- session->iv.offset);
-
- IMB_CHACHA20_POLY1305_INIT(qp->mb_mgr, session->key,
- &qp_data->chacha20_poly1305_ctx_data,
- iv_ptr, sym_op->aead.aad.data,
- (uint64_t)session->aad_length);
-
- if (session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_ENCRYPT) {
- IMB_CHACHA20_POLY1305_ENC_UPDATE(qp->mb_mgr,
- session->key,
- &qp_data->chacha20_poly1305_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len = data_length - part_len;
-
- while (total_len) {
- m_src = m_src->next;
- RTE_ASSERT(m_src != NULL);
-
- src = rte_pktmbuf_mtod(m_src, uint8_t *);
- if (oop)
- dst += part_len;
- else
- dst = src;
- part_len = (m_src->data_len < total_len) ?
- m_src->data_len : total_len;
-
- if (dst == NULL || src == NULL) {
- IPSEC_MB_LOG(ERR, "Invalid src or dst input");
- return -EINVAL;
- }
- IMB_CHACHA20_POLY1305_ENC_UPDATE(qp->mb_mgr,
- session->key,
- &qp_data->chacha20_poly1305_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len -= part_len;
- if (total_len < 0) {
- IPSEC_MB_LOG(ERR, "Invalid part len");
- return -EINVAL;
- }
- }
-
- tag = sym_op->aead.digest.data;
- IMB_CHACHA20_POLY1305_ENC_FINALIZE(qp->mb_mgr,
- &qp_data->chacha20_poly1305_ctx_data,
- tag, session->gen_digest_length);
-
- } else {
- IMB_CHACHA20_POLY1305_DEC_UPDATE(qp->mb_mgr,
- session->key,
- &qp_data->chacha20_poly1305_ctx_data,
- dst, src, (uint64_t)part_len);
-
- total_len = data_length - part_len;
-
- while (total_len) {
- m_src = m_src->next;
-
- RTE_ASSERT(m_src != NULL);
-
- src = rte_pktmbuf_mtod(m_src, uint8_t *);
- if (oop)
- dst += part_len;
- else
- dst = src;
- part_len = (m_src->data_len < total_len) ?
- m_src->data_len : total_len;
-
- if (dst == NULL || src == NULL) {
- IPSEC_MB_LOG(ERR, "Invalid src or dst input");
- return -EINVAL;
- }
- IMB_CHACHA20_POLY1305_DEC_UPDATE(qp->mb_mgr,
- session->key,
- &qp_data->chacha20_poly1305_ctx_data,
- dst, src, (uint64_t)part_len);
- total_len -= part_len;
- if (total_len < 0) {
- IPSEC_MB_LOG(ERR, "Invalid part len");
- return -EINVAL;
- }
- }
-
- tag = qp_data->temp_digest;
- IMB_CHACHA20_POLY1305_DEC_FINALIZE(qp->mb_mgr,
- &qp_data->chacha20_poly1305_ctx_data,
- tag, session->gen_digest_length);
- }
-
- return 0;
-}
-
-/**
- * Process a completed chacha poly op
- *
- * @param qp Queue Pair to process
- * @param op Crypto operation
- * @param sess Crypto session
- *
- * @return
- * - void
- */
-static void
-post_process_chacha20_poly1305_crypto_op(struct ipsec_mb_qp *qp,
- struct rte_crypto_op *op,
- struct chacha20_poly1305_session *session)
-{
- struct chacha20_poly1305_qp_data *qp_data =
- ipsec_mb_get_qp_private_data(qp);
-
- op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /* Verify digest if required */
- if (session->op == IPSEC_MB_OP_AEAD_AUTHENTICATED_DECRYPT ||
- session->op == IPSEC_MB_OP_HASH_VERIFY_ONLY) {
- uint8_t *digest = op->sym->aead.digest.data;
- uint8_t *tag = qp_data->temp_digest;
-
-#ifdef RTE_LIBRTE_PMD_CHACHA20_POLY1305_DEBUG
- rte_hexdump(stdout, "auth tag (orig):",
- digest, session->req_digest_length);
- rte_hexdump(stdout, "auth tag (calc):",
- tag, session->req_digest_length);
-#endif
- if (memcmp(tag, digest, session->req_digest_length) != 0)
- op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
-
- }
-
-}
-
-/**
- * Process a completed Chacha20_poly1305 request
- *
- * @param qp Queue Pair to process
- * @param op Crypto operation
- * @param sess Crypto session
- *
- * @return
- * - void
- */
-static void
-handle_completed_chacha20_poly1305_crypto_op(struct ipsec_mb_qp *qp,
- struct rte_crypto_op *op,
- struct chacha20_poly1305_session *sess)
-{
- post_process_chacha20_poly1305_crypto_op(qp, op, sess);
-
- /* Free session if a session-less crypto op */
- if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(sess, 0, sizeof(struct chacha20_poly1305_session));
- rte_mempool_put(qp->sess_mp, op->sym->session);
- op->sym->session = NULL;
- }
-}
-
-static uint16_t
-chacha20_poly1305_pmd_dequeue_burst(void *queue_pair,
- struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- struct chacha20_poly1305_session *sess;
- struct ipsec_mb_qp *qp = queue_pair;
-
- int retval = 0;
- unsigned int i = 0, nb_dequeued;
-
- nb_dequeued = rte_ring_dequeue_burst(qp->ingress_queue,
- (void **)ops, nb_ops, NULL);
-
- for (i = 0; i < nb_dequeued; i++) {
-
- sess = ipsec_mb_get_session_private(qp, ops[i]);
- if (unlikely(sess == NULL)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- qp->stats.dequeue_err_count++;
- break;
- }
-
- retval = chacha20_poly1305_crypto_op(qp, ops[i], sess);
- if (retval < 0) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- qp->stats.dequeue_err_count++;
- break;
- }
-
- handle_completed_chacha20_poly1305_crypto_op(qp, ops[i], sess);
- }
-
- qp->stats.dequeued_count += i;
-
- return i;
-}
+#include "pmd_aesni_mb_priv.h"
struct rte_cryptodev_ops chacha20_poly1305_pmd_ops = {
.dev_configure = ipsec_mb_config,
@@ -384,7 +57,7 @@ RTE_INIT(ipsec_mb_register_chacha20_poly1305)
= &ipsec_mb_pmds[IPSEC_MB_PMD_TYPE_CHACHA20_POLY1305];
chacha_poly_data->caps = chacha20_poly1305_capabilities;
- chacha_poly_data->dequeue_burst = chacha20_poly1305_pmd_dequeue_burst;
+ chacha_poly_data->dequeue_burst = aesni_mb_dequeue_burst;
chacha_poly_data->feature_flags =
RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
@@ -398,7 +71,7 @@ RTE_INIT(ipsec_mb_register_chacha20_poly1305)
chacha_poly_data->qp_priv_size =
sizeof(struct chacha20_poly1305_qp_data);
chacha_poly_data->session_configure =
- chacha20_poly1305_session_configure;
+ aesni_mb_session_configure;
chacha_poly_data->session_priv_size =
- sizeof(struct chacha20_poly1305_session);
+ sizeof(struct aesni_mb_session);
}
diff --git a/drivers/crypto/ipsec_mb/pmd_kasumi.c b/drivers/crypto/ipsec_mb/pmd_kasumi.c
index 5db9c523cd..0c549f9459 100644
--- a/drivers/crypto/ipsec_mb/pmd_kasumi.c
+++ b/drivers/crypto/ipsec_mb/pmd_kasumi.c
@@ -10,403 +10,7 @@
#include <rte_malloc.h>
#include "pmd_kasumi_priv.h"
-
-/** Parse crypto xform chain and set private session parameters. */
-static int
-kasumi_session_configure(IMB_MGR *mgr, void *priv_sess,
- const struct rte_crypto_sym_xform *xform)
-{
- const struct rte_crypto_sym_xform *auth_xform = NULL;
- const struct rte_crypto_sym_xform *cipher_xform = NULL;
- enum ipsec_mb_operation mode;
- struct kasumi_session *sess = (struct kasumi_session *)priv_sess;
- /* Select Crypto operation - hash then cipher / cipher then hash */
- int ret = ipsec_mb_parse_xform(xform, &mode, &auth_xform,
- &cipher_xform, NULL);
-
- if (ret)
- return ret;
-
- if (cipher_xform) {
- /* Only KASUMI F8 supported */
- if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8) {
- IPSEC_MB_LOG(ERR, "Unsupported cipher algorithm ");
- return -ENOTSUP;
- }
-
- sess->cipher_iv_offset = cipher_xform->cipher.iv.offset;
- if (cipher_xform->cipher.iv.length != KASUMI_IV_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong IV length");
- return -EINVAL;
- }
-
- /* Initialize key */
- IMB_KASUMI_INIT_F8_KEY_SCHED(mgr,
- cipher_xform->cipher.key.data,
- &sess->pKeySched_cipher);
- }
-
- if (auth_xform) {
- /* Only KASUMI F9 supported */
- if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9) {
- IPSEC_MB_LOG(ERR, "Unsupported authentication");
- return -ENOTSUP;
- }
-
- if (auth_xform->auth.digest_length != KASUMI_DIGEST_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong digest length");
- return -EINVAL;
- }
-
- sess->auth_op = auth_xform->auth.op;
-
- /* Initialize key */
- IMB_KASUMI_INIT_F9_KEY_SCHED(mgr, auth_xform->auth.key.data,
- &sess->pKeySched_hash);
- }
-
- sess->op = mode;
- return ret;
-}
-
-/** Encrypt/decrypt mbufs with same cipher key. */
-static uint8_t
-process_kasumi_cipher_op(struct ipsec_mb_qp *qp, struct rte_crypto_op **ops,
- struct kasumi_session *session, uint8_t num_ops)
-{
- unsigned int i;
- uint8_t processed_ops = 0;
- const void *src[num_ops];
- void *dst[num_ops];
- uint8_t *iv_ptr;
- uint64_t iv[num_ops];
- uint32_t num_bytes[num_ops];
-
- for (i = 0; i < num_ops; i++) {
- src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *)
- + (ops[i]->sym->cipher.data.offset >> 3);
- dst[i] = ops[i]->sym->m_dst
- ? rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *)
- + (ops[i]->sym->cipher.data.offset >> 3)
- : rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *)
- + (ops[i]->sym->cipher.data.offset >> 3);
- iv_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- session->cipher_iv_offset);
- iv[i] = *((uint64_t *)(iv_ptr));
- num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
-
- processed_ops++;
- }
-
- if (processed_ops != 0)
- IMB_KASUMI_F8_N_BUFFER(qp->mb_mgr, &session->pKeySched_cipher,
- iv, src, dst, num_bytes,
- processed_ops);
-
- return processed_ops;
-}
-
-/** Encrypt/decrypt mbuf (bit level function). */
-static uint8_t
-process_kasumi_cipher_op_bit(struct ipsec_mb_qp *qp, struct rte_crypto_op *op,
- struct kasumi_session *session)
-{
- uint8_t *src, *dst;
- uint8_t *iv_ptr;
- uint64_t iv;
- uint32_t length_in_bits, offset_in_bits;
-
- offset_in_bits = op->sym->cipher.data.offset;
- src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
- if (op->sym->m_dst == NULL)
- dst = src;
- else
- dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
- iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- session->cipher_iv_offset);
- iv = *((uint64_t *)(iv_ptr));
- length_in_bits = op->sym->cipher.data.length;
-
- IMB_KASUMI_F8_1_BUFFER_BIT(qp->mb_mgr, &session->pKeySched_cipher, iv,
- src, dst, length_in_bits, offset_in_bits);
-
- return 1;
-}
-
-/** Generate/verify hash from mbufs with same hash key. */
-static int
-process_kasumi_hash_op(struct ipsec_mb_qp *qp, struct rte_crypto_op **ops,
- struct kasumi_session *session, uint8_t num_ops)
-{
- unsigned int i;
- uint8_t processed_ops = 0;
- uint8_t *src, *dst;
- uint32_t length_in_bits;
- uint32_t num_bytes;
- struct kasumi_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
-
- for (i = 0; i < num_ops; i++) {
- /* Data must be byte aligned */
- if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Invalid Offset");
- break;
- }
-
- length_in_bits = ops[i]->sym->auth.data.length;
-
- src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *)
- + (ops[i]->sym->auth.data.offset >> 3);
- /* Direction from next bit after end of message */
- num_bytes = length_in_bits >> 3;
-
- if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
- dst = qp_data->temp_digest;
- IMB_KASUMI_F9_1_BUFFER(qp->mb_mgr,
- &session->pKeySched_hash, src,
- num_bytes, dst);
-
- /* Verify digest. */
- if (memcmp(dst, ops[i]->sym->auth.digest.data,
- KASUMI_DIGEST_LENGTH)
- != 0)
- ops[i]->status
- = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- } else {
- dst = ops[i]->sym->auth.digest.data;
-
- IMB_KASUMI_F9_1_BUFFER(qp->mb_mgr,
- &session->pKeySched_hash, src,
- num_bytes, dst);
- }
- processed_ops++;
- }
-
- return processed_ops;
-}
-
-/** Process a batch of crypto ops which shares the same session. */
-static int
-process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
- struct ipsec_mb_qp *qp, uint8_t num_ops)
-{
- unsigned int i;
- unsigned int processed_ops;
-
- switch (session->op) {
- case IPSEC_MB_OP_ENCRYPT_ONLY:
- case IPSEC_MB_OP_DECRYPT_ONLY:
- processed_ops
- = process_kasumi_cipher_op(qp, ops, session, num_ops);
- break;
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- processed_ops
- = process_kasumi_hash_op(qp, ops, session, num_ops);
- break;
- case IPSEC_MB_OP_ENCRYPT_THEN_HASH_GEN:
- case IPSEC_MB_OP_DECRYPT_THEN_HASH_VERIFY:
- processed_ops
- = process_kasumi_cipher_op(qp, ops, session, num_ops);
- process_kasumi_hash_op(qp, ops, session, processed_ops);
- break;
- case IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT:
- case IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT:
- processed_ops
- = process_kasumi_hash_op(qp, ops, session, num_ops);
- process_kasumi_cipher_op(qp, ops, session, processed_ops);
- break;
- default:
- /* Operation not supported. */
- processed_ops = 0;
- }
-
- for (i = 0; i < num_ops; i++) {
- /*
- * If there was no error/authentication failure,
- * change status to successful.
- */
- if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
- ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /* Free session if a session-less crypto op. */
- if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(session, 0, sizeof(struct kasumi_session));
- rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
- ops[i]->sym->session = NULL;
- }
- }
- return processed_ops;
-}
-
-/** Process a crypto op with length/offset in bits. */
-static int
-process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
- struct ipsec_mb_qp *qp)
-{
- unsigned int processed_op;
-
- switch (session->op) {
- /* case KASUMI_OP_ONLY_CIPHER: */
- case IPSEC_MB_OP_ENCRYPT_ONLY:
- case IPSEC_MB_OP_DECRYPT_ONLY:
- processed_op = process_kasumi_cipher_op_bit(qp, op, session);
- break;
- /* case KASUMI_OP_ONLY_AUTH: */
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- processed_op = process_kasumi_hash_op(qp, &op, session, 1);
- break;
- /* case KASUMI_OP_CIPHER_AUTH: */
- case IPSEC_MB_OP_ENCRYPT_THEN_HASH_GEN:
- processed_op = process_kasumi_cipher_op_bit(qp, op, session);
- if (processed_op == 1)
- process_kasumi_hash_op(qp, &op, session, 1);
- break;
- /* case KASUMI_OP_AUTH_CIPHER: */
- case IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT:
- processed_op = process_kasumi_hash_op(qp, &op, session, 1);
- if (processed_op == 1)
- process_kasumi_cipher_op_bit(qp, op, session);
- break;
- default:
- /* Operation not supported. */
- processed_op = 0;
- }
-
- /*
- * If there was no error/authentication failure,
- * change status to successful.
- */
- if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
- op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
-
- /* Free session if a session-less crypto op. */
- if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session), 0,
- sizeof(struct kasumi_session));
- rte_mempool_put(qp->sess_mp, (void *)op->sym->session);
- op->sym->session = NULL;
- }
- return processed_op;
-}
-
-static uint16_t
-kasumi_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
- uint16_t nb_ops)
-{
- struct rte_crypto_op *c_ops[nb_ops];
- struct rte_crypto_op *curr_c_op = NULL;
-
- struct kasumi_session *prev_sess = NULL, *curr_sess = NULL;
- struct ipsec_mb_qp *qp = queue_pair;
- unsigned int i;
- uint8_t burst_size = 0;
- uint8_t processed_ops;
- unsigned int nb_dequeued;
-
- nb_dequeued = rte_ring_dequeue_burst(qp->ingress_queue,
- (void **)ops, nb_ops, NULL);
- for (i = 0; i < nb_dequeued; i++) {
- curr_c_op = ops[i];
-
-#ifdef RTE_LIBRTE_PMD_KASUMI_DEBUG
- if (!rte_pktmbuf_is_contiguous(curr_c_op->sym->m_src)
- || (curr_c_op->sym->m_dst != NULL
- && !rte_pktmbuf_is_contiguous(
- curr_c_op->sym->m_dst))) {
- IPSEC_MB_LOG(ERR,
- "PMD supports only contiguous mbufs, op (%p) provides noncontiguous mbuf as source/destination buffer.",
- curr_c_op);
- curr_c_op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-#endif
-
- /* Set status as enqueued (not processed yet) by default. */
- curr_c_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-
- curr_sess = (struct kasumi_session *)
- ipsec_mb_get_session_private(qp, curr_c_op);
- if (unlikely(curr_sess == NULL
- || curr_sess->op == IPSEC_MB_OP_NOT_SUPPORTED)) {
- curr_c_op->status
- = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
- break;
- }
-
- /* If length/offset is at bit-level, process this buffer alone.
- */
- if (((curr_c_op->sym->cipher.data.length % BYTE_LEN) != 0)
- || ((ops[i]->sym->cipher.data.offset % BYTE_LEN) != 0)) {
- /* Process the ops of the previous session. */
- if (prev_sess != NULL) {
- processed_ops = process_ops(c_ops, prev_sess,
- qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_sess = NULL;
- }
-
- processed_ops = process_op_bit(curr_c_op,
- curr_sess, qp);
- if (processed_ops != 1)
- break;
-
- continue;
- }
-
- /* Batch ops that share the same session. */
- if (prev_sess == NULL) {
- prev_sess = curr_sess;
- c_ops[burst_size++] = curr_c_op;
- } else if (curr_sess == prev_sess) {
- c_ops[burst_size++] = curr_c_op;
- /*
- * When there are enough ops to process in a batch,
- * process them, and start a new batch.
- */
- if (burst_size == KASUMI_MAX_BURST) {
- processed_ops = process_ops(c_ops, prev_sess,
- qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_sess = NULL;
- }
- } else {
- /*
- * Different session, process the ops
- * of the previous session.
- */
- processed_ops = process_ops(c_ops, prev_sess, qp,
- burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_sess = curr_sess;
-
- c_ops[burst_size++] = curr_c_op;
- }
- }
-
- if (burst_size != 0) {
- /* Process the crypto ops of the last session. */
- processed_ops = process_ops(c_ops, prev_sess, qp, burst_size);
- }
-
- qp->stats.dequeued_count += i;
- return i;
-}
+#include "pmd_aesni_mb_priv.h"
struct rte_cryptodev_ops kasumi_pmd_ops = {
.dev_configure = ipsec_mb_config,
@@ -457,7 +61,7 @@ RTE_INIT(ipsec_mb_register_kasumi)
= &ipsec_mb_pmds[IPSEC_MB_PMD_TYPE_KASUMI];
kasumi_data->caps = kasumi_capabilities;
- kasumi_data->dequeue_burst = kasumi_pmd_dequeue_burst;
+ kasumi_data->dequeue_burst = aesni_mb_dequeue_burst;
kasumi_data->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO
| RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING
| RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA
@@ -467,6 +71,6 @@ RTE_INIT(ipsec_mb_register_kasumi)
kasumi_data->internals_priv_size = 0;
kasumi_data->ops = &kasumi_pmd_ops;
kasumi_data->qp_priv_size = sizeof(struct kasumi_qp_data);
- kasumi_data->session_configure = kasumi_session_configure;
- kasumi_data->session_priv_size = sizeof(struct kasumi_session);
+ kasumi_data->session_configure = aesni_mb_session_configure;
+ kasumi_data->session_priv_size = sizeof(struct aesni_mb_session);
}
diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index e64df1a462..92ec955baa 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -3,539 +3,7 @@
*/
#include "pmd_snow3g_priv.h"
-
-/** Parse crypto xform chain and set private session parameters. */
-static int
-snow3g_session_configure(IMB_MGR *mgr, void *priv_sess,
- const struct rte_crypto_sym_xform *xform)
-{
- struct snow3g_session *sess = (struct snow3g_session *)priv_sess;
- const struct rte_crypto_sym_xform *auth_xform = NULL;
- const struct rte_crypto_sym_xform *cipher_xform = NULL;
- enum ipsec_mb_operation mode;
-
- /* Select Crypto operation - hash then cipher / cipher then hash */
- int ret = ipsec_mb_parse_xform(xform, &mode, &auth_xform,
- &cipher_xform, NULL);
- if (ret)
- return ret;
-
- if (cipher_xform) {
- /* Only SNOW 3G UEA2 supported */
- if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_SNOW3G_UEA2)
- return -ENOTSUP;
-
- if (cipher_xform->cipher.iv.length != SNOW3G_IV_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong IV length");
- return -EINVAL;
- }
- if (cipher_xform->cipher.key.length > SNOW3G_MAX_KEY_SIZE) {
- IPSEC_MB_LOG(ERR, "Not enough memory to store the key");
- return -ENOMEM;
- }
-
- sess->cipher_iv_offset = cipher_xform->cipher.iv.offset;
-
- /* Initialize key */
- IMB_SNOW3G_INIT_KEY_SCHED(mgr, cipher_xform->cipher.key.data,
- &sess->pKeySched_cipher);
- }
-
- if (auth_xform) {
- /* Only SNOW 3G UIA2 supported */
- if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_SNOW3G_UIA2)
- return -ENOTSUP;
-
- if (auth_xform->auth.digest_length != SNOW3G_DIGEST_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong digest length");
- return -EINVAL;
- }
- if (auth_xform->auth.key.length > SNOW3G_MAX_KEY_SIZE) {
- IPSEC_MB_LOG(ERR, "Not enough memory to store the key");
- return -ENOMEM;
- }
-
- sess->auth_op = auth_xform->auth.op;
-
- if (auth_xform->auth.iv.length != SNOW3G_IV_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong IV length");
- return -EINVAL;
- }
- sess->auth_iv_offset = auth_xform->auth.iv.offset;
-
- /* Initialize key */
- IMB_SNOW3G_INIT_KEY_SCHED(mgr, auth_xform->auth.key.data,
- &sess->pKeySched_hash);
- }
-
- sess->op = mode;
-
- return 0;
-}
-
-/** Check if conditions are met for digest-appended operations */
-static uint8_t *
-snow3g_digest_appended_in_src(struct rte_crypto_op *op)
-{
- unsigned int auth_size, cipher_size;
-
- auth_size = (op->sym->auth.data.offset >> 3) +
- (op->sym->auth.data.length >> 3);
- cipher_size = (op->sym->cipher.data.offset >> 3) +
- (op->sym->cipher.data.length >> 3);
-
- if (auth_size < cipher_size)
- return rte_pktmbuf_mtod_offset(op->sym->m_src,
- uint8_t *, auth_size);
-
- return NULL;
-}
-
-/** Encrypt/decrypt mbufs with same cipher key. */
-static uint8_t
-process_snow3g_cipher_op(struct ipsec_mb_qp *qp, struct rte_crypto_op **ops,
- struct snow3g_session *session,
- uint8_t num_ops)
-{
- uint32_t i;
- uint8_t processed_ops = 0;
- const void *src[SNOW3G_MAX_BURST] = {NULL};
- void *dst[SNOW3G_MAX_BURST] = {NULL};
- uint8_t *digest_appended[SNOW3G_MAX_BURST] = {NULL};
- const void *iv[SNOW3G_MAX_BURST] = {NULL};
- uint32_t num_bytes[SNOW3G_MAX_BURST] = {0};
- uint32_t cipher_off, cipher_len;
- int unencrypted_bytes = 0;
-
- for (i = 0; i < num_ops; i++) {
-
- cipher_off = ops[i]->sym->cipher.data.offset >> 3;
- cipher_len = ops[i]->sym->cipher.data.length >> 3;
- src[i] = rte_pktmbuf_mtod_offset(
- ops[i]->sym->m_src, uint8_t *, cipher_off);
-
- /* If out-of-place operation */
- if (ops[i]->sym->m_dst &&
- ops[i]->sym->m_src != ops[i]->sym->m_dst) {
- dst[i] = rte_pktmbuf_mtod_offset(
- ops[i]->sym->m_dst, uint8_t *, cipher_off);
-
- /* In case of out-of-place, auth-cipher operation
- * with partial encryption of the digest, copy
- * the remaining, unencrypted part.
- */
- if (session->op == IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT
- || session->op == IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT)
- unencrypted_bytes =
- (ops[i]->sym->auth.data.offset >> 3) +
- (ops[i]->sym->auth.data.length >> 3) +
- (SNOW3G_DIGEST_LENGTH) -
- cipher_off - cipher_len;
- if (unencrypted_bytes > 0)
- rte_memcpy(
- rte_pktmbuf_mtod_offset(
- ops[i]->sym->m_dst, uint8_t *,
- cipher_off + cipher_len),
- rte_pktmbuf_mtod_offset(
- ops[i]->sym->m_src, uint8_t *,
- cipher_off + cipher_len),
- unencrypted_bytes);
- } else
- dst[i] = rte_pktmbuf_mtod_offset(ops[i]->sym->m_src,
- uint8_t *, cipher_off);
-
- iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- session->cipher_iv_offset);
- num_bytes[i] = cipher_len;
- processed_ops++;
- }
-
- IMB_SNOW3G_F8_N_BUFFER(qp->mb_mgr, &session->pKeySched_cipher, iv,
- src, dst, num_bytes, processed_ops);
-
- /* Take care of the raw digest data in src buffer */
- for (i = 0; i < num_ops; i++) {
- if ((session->op == IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT ||
- session->op == IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT) &&
- ops[i]->sym->m_dst != NULL) {
- digest_appended[i] =
- snow3g_digest_appended_in_src(ops[i]);
- /* Clear unencrypted digest from
- * the src buffer
- */
- if (digest_appended[i] != NULL)
- memset(digest_appended[i],
- 0, SNOW3G_DIGEST_LENGTH);
- }
- }
- return processed_ops;
-}
-
-/** Encrypt/decrypt mbuf (bit level function). */
-static uint8_t
-process_snow3g_cipher_op_bit(struct ipsec_mb_qp *qp,
- struct rte_crypto_op *op,
- struct snow3g_session *session)
-{
- uint8_t *src, *dst;
- uint8_t *iv;
- uint32_t length_in_bits, offset_in_bits;
- int unencrypted_bytes = 0;
-
- offset_in_bits = op->sym->cipher.data.offset;
- src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
- if (op->sym->m_dst == NULL) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "bit-level in-place not supported\n");
- return 0;
- }
- length_in_bits = op->sym->cipher.data.length;
- dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
- /* In case of out-of-place, auth-cipher operation
- * with partial encryption of the digest, copy
- * the remaining, unencrypted part.
- */
- if (session->op == IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT ||
- session->op == IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT)
- unencrypted_bytes =
- (op->sym->auth.data.offset >> 3) +
- (op->sym->auth.data.length >> 3) +
- (SNOW3G_DIGEST_LENGTH) -
- (offset_in_bits >> 3) -
- (length_in_bits >> 3);
- if (unencrypted_bytes > 0)
- rte_memcpy(
- rte_pktmbuf_mtod_offset(
- op->sym->m_dst, uint8_t *,
- (length_in_bits >> 3)),
- rte_pktmbuf_mtod_offset(
- op->sym->m_src, uint8_t *,
- (length_in_bits >> 3)),
- unencrypted_bytes);
-
- iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- session->cipher_iv_offset);
-
- IMB_SNOW3G_F8_1_BUFFER_BIT(qp->mb_mgr, &session->pKeySched_cipher, iv,
- src, dst, length_in_bits, offset_in_bits);
-
- return 1;
-}
-
-/** Generate/verify hash from mbufs with same hash key. */
-static int
-process_snow3g_hash_op(struct ipsec_mb_qp *qp, struct rte_crypto_op **ops,
- struct snow3g_session *session,
- uint8_t num_ops)
-{
- uint32_t i;
- uint8_t processed_ops = 0;
- uint8_t *src, *dst;
- uint32_t length_in_bits;
- uint8_t *iv;
- uint8_t digest_appended = 0;
- struct snow3g_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
-
- for (i = 0; i < num_ops; i++) {
- /* Data must be byte aligned */
- if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Offset");
- break;
- }
-
- dst = NULL;
-
- length_in_bits = ops[i]->sym->auth.data.length;
-
- src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
- (ops[i]->sym->auth.data.offset >> 3);
- iv = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- session->auth_iv_offset);
-
- if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
- dst = qp_data->temp_digest;
- /* Handle auth cipher verify oop case*/
- if ((session->op ==
- IPSEC_MB_OP_ENCRYPT_THEN_HASH_GEN ||
- session->op ==
- IPSEC_MB_OP_DECRYPT_THEN_HASH_VERIFY) &&
- ops[i]->sym->m_dst != NULL)
- src = rte_pktmbuf_mtod_offset(
- ops[i]->sym->m_dst, uint8_t *,
- ops[i]->sym->auth.data.offset >> 3);
-
- IMB_SNOW3G_F9_1_BUFFER(qp->mb_mgr,
- &session->pKeySched_hash,
- iv, src, length_in_bits, dst);
- /* Verify digest. */
- if (memcmp(dst, ops[i]->sym->auth.digest.data,
- SNOW3G_DIGEST_LENGTH) != 0)
- ops[i]->status =
- RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- } else {
- if (session->op ==
- IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT ||
- session->op ==
- IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT)
- dst = snow3g_digest_appended_in_src(ops[i]);
-
- if (dst != NULL)
- digest_appended = 1;
- else
- dst = ops[i]->sym->auth.digest.data;
-
- IMB_SNOW3G_F9_1_BUFFER(qp->mb_mgr,
- &session->pKeySched_hash,
- iv, src, length_in_bits, dst);
-
- /* Copy back digest from src to auth.digest.data */
- if (digest_appended)
- rte_memcpy(ops[i]->sym->auth.digest.data,
- dst, SNOW3G_DIGEST_LENGTH);
- }
- processed_ops++;
- }
-
- return processed_ops;
-}
-
-/** Process a batch of crypto ops which shares the same session. */
-static int
-process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
- struct ipsec_mb_qp *qp, uint8_t num_ops)
-{
- uint32_t i;
- uint32_t processed_ops;
-
-#ifdef RTE_LIBRTE_PMD_SNOW3G_DEBUG
- for (i = 0; i < num_ops; i++) {
- if (!rte_pktmbuf_is_contiguous(ops[i]->sym->m_src) ||
- (ops[i]->sym->m_dst != NULL &&
- !rte_pktmbuf_is_contiguous(
- ops[i]->sym->m_dst))) {
- IPSEC_MB_LOG(ERR,
- "PMD supports only contiguous mbufs, "
- "op (%p) provides noncontiguous mbuf as "
- "source/destination buffer.\n", ops[i]);
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- return 0;
- }
- }
-#endif
-
- switch (session->op) {
- case IPSEC_MB_OP_ENCRYPT_ONLY:
- case IPSEC_MB_OP_DECRYPT_ONLY:
- processed_ops = process_snow3g_cipher_op(qp, ops,
- session, num_ops);
- break;
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- processed_ops = process_snow3g_hash_op(qp, ops, session,
- num_ops);
- break;
- case IPSEC_MB_OP_ENCRYPT_THEN_HASH_GEN:
- case IPSEC_MB_OP_DECRYPT_THEN_HASH_VERIFY:
- processed_ops = process_snow3g_cipher_op(qp, ops, session,
- num_ops);
- process_snow3g_hash_op(qp, ops, session, processed_ops);
- break;
- case IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT:
- case IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT:
- processed_ops = process_snow3g_hash_op(qp, ops, session,
- num_ops);
- process_snow3g_cipher_op(qp, ops, session, processed_ops);
- break;
- default:
- /* Operation not supported. */
- processed_ops = 0;
- }
-
- for (i = 0; i < num_ops; i++) {
- /*
- * If there was no error/authentication failure,
- * change status to successful.
- */
- if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
- ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /* Free session if a session-less crypto op. */
- if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(session, 0, sizeof(struct snow3g_session));
- rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
- ops[i]->sym->session = NULL;
- }
- }
- return processed_ops;
-}
-
-/** Process a crypto op with length/offset in bits. */
-static int
-process_op_bit(struct rte_crypto_op *op, struct snow3g_session *session,
- struct ipsec_mb_qp *qp)
-{
- unsigned int processed_op;
- int ret;
-
- switch (session->op) {
- case IPSEC_MB_OP_ENCRYPT_ONLY:
- case IPSEC_MB_OP_DECRYPT_ONLY:
-
- processed_op = process_snow3g_cipher_op_bit(qp, op,
- session);
- break;
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- processed_op = process_snow3g_hash_op(qp, &op, session, 1);
- break;
- case IPSEC_MB_OP_ENCRYPT_THEN_HASH_GEN:
- case IPSEC_MB_OP_DECRYPT_THEN_HASH_VERIFY:
- processed_op = process_snow3g_cipher_op_bit(qp, op, session);
- if (processed_op == 1)
- process_snow3g_hash_op(qp, &op, session, 1);
- break;
- case IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT:
- case IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT:
- processed_op = process_snow3g_hash_op(qp, &op, session, 1);
- if (processed_op == 1)
- process_snow3g_cipher_op_bit(qp, op, session);
- break;
- default:
- /* Operation not supported. */
- processed_op = 0;
- }
-
- /*
- * If there was no error/authentication failure,
- * change status to successful.
- */
- if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
- op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
-
- /* Free session if a session-less crypto op. */
- if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session), 0,
- sizeof(struct snow3g_session));
- rte_mempool_put(qp->sess_mp, (void *)op->sym->session);
- op->sym->session = NULL;
- }
-
- if (unlikely(processed_op != 1))
- return 0;
-
- ret = rte_ring_enqueue(qp->ingress_queue, op);
- if (ret != 0)
- return ret;
-
- return 1;
-}
-
-static uint16_t
-snow3g_pmd_dequeue_burst(void *queue_pair,
- struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- struct ipsec_mb_qp *qp = queue_pair;
- struct rte_crypto_op *c_ops[SNOW3G_MAX_BURST];
- struct rte_crypto_op *curr_c_op;
-
- struct snow3g_session *prev_sess = NULL, *curr_sess = NULL;
- uint32_t i;
- uint8_t burst_size = 0;
- uint8_t processed_ops;
- uint32_t nb_dequeued;
-
- nb_dequeued = rte_ring_dequeue_burst(qp->ingress_queue,
- (void **)ops, nb_ops, NULL);
-
- for (i = 0; i < nb_dequeued; i++) {
- curr_c_op = ops[i];
-
- /* Set status as enqueued (not processed yet) by default. */
- curr_c_op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-
- curr_sess = ipsec_mb_get_session_private(qp, curr_c_op);
- if (unlikely(curr_sess == NULL ||
- curr_sess->op == IPSEC_MB_OP_NOT_SUPPORTED)) {
- curr_c_op->status =
- RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
- break;
- }
-
- /* If length/offset is at bit-level,
- * process this buffer alone.
- */
- if (((curr_c_op->sym->cipher.data.length % BYTE_LEN) != 0)
- || ((curr_c_op->sym->cipher.data.offset
- % BYTE_LEN) != 0)) {
- /* Process the ops of the previous session. */
- if (prev_sess != NULL) {
- processed_ops = process_ops(c_ops, prev_sess,
- qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_sess = NULL;
- }
-
- processed_ops = process_op_bit(curr_c_op, curr_sess, qp);
- if (processed_ops != 1)
- break;
-
- continue;
- }
-
- /* Batch ops that share the same session. */
- if (prev_sess == NULL) {
- prev_sess = curr_sess;
- c_ops[burst_size++] = curr_c_op;
- } else if (curr_sess == prev_sess) {
- c_ops[burst_size++] = curr_c_op;
- /*
- * When there are enough ops to process in a batch,
- * process them, and start a new batch.
- */
- if (burst_size == SNOW3G_MAX_BURST) {
- processed_ops = process_ops(c_ops, prev_sess,
- qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_sess = NULL;
- }
- } else {
- /*
- * Different session, process the ops
- * of the previous session.
- */
- processed_ops = process_ops(c_ops, prev_sess,
- qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_sess = curr_sess;
-
- c_ops[burst_size++] = curr_c_op;
- }
- }
-
- if (burst_size != 0) {
- /* Process the crypto ops of the last session. */
- processed_ops = process_ops(c_ops, prev_sess,
- qp, burst_size);
- }
-
- qp->stats.dequeued_count += i;
- return i;
-}
+#include "pmd_aesni_mb_priv.h"
struct rte_cryptodev_ops snow3g_pmd_ops = {
.dev_configure = ipsec_mb_config,
@@ -586,7 +54,7 @@ RTE_INIT(ipsec_mb_register_snow3g)
= &ipsec_mb_pmds[IPSEC_MB_PMD_TYPE_SNOW3G];
snow3g_data->caps = snow3g_capabilities;
- snow3g_data->dequeue_burst = snow3g_pmd_dequeue_burst;
+ snow3g_data->dequeue_burst = aesni_mb_dequeue_burst;
snow3g_data->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
@@ -596,6 +64,6 @@ RTE_INIT(ipsec_mb_register_snow3g)
snow3g_data->internals_priv_size = 0;
snow3g_data->ops = &snow3g_pmd_ops;
snow3g_data->qp_priv_size = sizeof(struct snow3g_qp_data);
- snow3g_data->session_configure = snow3g_session_configure;
- snow3g_data->session_priv_size = sizeof(struct snow3g_session);
+ snow3g_data->session_configure = aesni_mb_session_configure;
+ snow3g_data->session_priv_size = sizeof(struct aesni_mb_session);
}
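
For reference, the snow3g hunks above reduce the PMD to its capability
table and queue-pair private data; session setup and the dequeue path now
come from the shared AESNI-MB JOB-API code. A minimal sketch of the
resulting registration shape, consolidated from the hunks above
(illustrative only, not the literal file contents):

/* Sketch: per-PMD data stays local, datapath handlers are shared. */
RTE_INIT(ipsec_mb_register_snow3g)
{
	struct ipsec_mb_internals *snow3g_data =
		&ipsec_mb_pmds[IPSEC_MB_PMD_TYPE_SNOW3G];

	snow3g_data->caps = snow3g_capabilities;	/* per-PMD */
	snow3g_data->ops = &snow3g_pmd_ops;		/* per-PMD */
	snow3g_data->qp_priv_size = sizeof(struct snow3g_qp_data);
	/* Shared JOB-API codepath, replacing the direct-API handlers: */
	snow3g_data->dequeue_burst = aesni_mb_dequeue_burst;
	snow3g_data->session_configure = aesni_mb_session_configure;
	snow3g_data->session_priv_size = sizeof(struct aesni_mb_session);
}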
diff --git a/drivers/crypto/ipsec_mb/pmd_zuc.c b/drivers/crypto/ipsec_mb/pmd_zuc.c
index 92fd9d1808..a4eef57d62 100644
--- a/drivers/crypto/ipsec_mb/pmd_zuc.c
+++ b/drivers/crypto/ipsec_mb/pmd_zuc.c
@@ -3,341 +3,7 @@
*/
#include "pmd_zuc_priv.h"
-
-/** Parse crypto xform chain and set private session parameters. */
-static int
-zuc_session_configure(__rte_unused IMB_MGR * mgr, void *zuc_sess,
- const struct rte_crypto_sym_xform *xform)
-{
- struct zuc_session *sess = (struct zuc_session *) zuc_sess;
- const struct rte_crypto_sym_xform *auth_xform = NULL;
- const struct rte_crypto_sym_xform *cipher_xform = NULL;
- enum ipsec_mb_operation mode;
- /* Select Crypto operation - hash then cipher / cipher then hash */
- int ret = ipsec_mb_parse_xform(xform, &mode, &auth_xform,
- &cipher_xform, NULL);
-
- if (ret)
- return ret;
-
- if (cipher_xform) {
- /* Only ZUC EEA3 supported */
- if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_ZUC_EEA3)
- return -ENOTSUP;
-
- if (cipher_xform->cipher.iv.length != ZUC_IV_KEY_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong IV length");
- return -EINVAL;
- }
- sess->cipher_iv_offset = cipher_xform->cipher.iv.offset;
-
- /* Copy the key */
- memcpy(sess->pKey_cipher, cipher_xform->cipher.key.data,
- ZUC_IV_KEY_LENGTH);
- }
-
- if (auth_xform) {
- /* Only ZUC EIA3 supported */
- if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_ZUC_EIA3)
- return -ENOTSUP;
-
- if (auth_xform->auth.digest_length != ZUC_DIGEST_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong digest length");
- return -EINVAL;
- }
-
- sess->auth_op = auth_xform->auth.op;
-
- if (auth_xform->auth.iv.length != ZUC_IV_KEY_LENGTH) {
- IPSEC_MB_LOG(ERR, "Wrong IV length");
- return -EINVAL;
- }
- sess->auth_iv_offset = auth_xform->auth.iv.offset;
-
- /* Copy the key */
- memcpy(sess->pKey_hash, auth_xform->auth.key.data,
- ZUC_IV_KEY_LENGTH);
- }
-
- sess->op = mode;
- return 0;
-}
-
-/** Encrypt/decrypt mbufs. */
-static uint8_t
-process_zuc_cipher_op(struct ipsec_mb_qp *qp, struct rte_crypto_op **ops,
- struct zuc_session **sessions,
- uint8_t num_ops)
-{
- unsigned int i;
- uint8_t processed_ops = 0;
- const void *src[ZUC_MAX_BURST];
- void *dst[ZUC_MAX_BURST];
- const void *iv[ZUC_MAX_BURST];
- uint32_t num_bytes[ZUC_MAX_BURST];
- const void *cipher_keys[ZUC_MAX_BURST];
- struct zuc_session *sess;
-
- for (i = 0; i < num_ops; i++) {
- if (((ops[i]->sym->cipher.data.length % BYTE_LEN) != 0)
- || ((ops[i]->sym->cipher.data.offset
- % BYTE_LEN) != 0)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Data Length or offset");
- break;
- }
-
- sess = sessions[i];
-
-#ifdef RTE_LIBRTE_PMD_ZUC_DEBUG
- if (!rte_pktmbuf_is_contiguous(ops[i]->sym->m_src) ||
- (ops[i]->sym->m_dst != NULL &&
- !rte_pktmbuf_is_contiguous(
- ops[i]->sym->m_dst))) {
- IPSEC_MB_LOG(ERR, "PMD supports only "
- " contiguous mbufs, op (%p) "
- "provides noncontiguous mbuf "
- "as source/destination buffer.\n",
- "PMD supports only contiguous mbufs, "
- "op (%p) provides noncontiguous mbuf "
- "as source/destination buffer.\n",
- ops[i]);
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-#endif
-
- src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
- (ops[i]->sym->cipher.data.offset >> 3);
- dst[i] = ops[i]->sym->m_dst ?
- rte_pktmbuf_mtod(ops[i]->sym->m_dst, uint8_t *) +
- (ops[i]->sym->cipher.data.offset >> 3) :
- rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
- (ops[i]->sym->cipher.data.offset >> 3);
- iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- sess->cipher_iv_offset);
- num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
-
- cipher_keys[i] = sess->pKey_cipher;
-
- processed_ops++;
- }
-
- IMB_ZUC_EEA3_N_BUFFER(qp->mb_mgr, (const void **)cipher_keys,
- (const void **)iv, (const void **)src, (void **)dst,
- num_bytes, processed_ops);
-
- return processed_ops;
-}
-
-/** Generate/verify hash from mbufs. */
-static int
-process_zuc_hash_op(struct ipsec_mb_qp *qp, struct rte_crypto_op **ops,
- struct zuc_session **sessions,
- uint8_t num_ops)
-{
- unsigned int i;
- uint8_t processed_ops = 0;
- uint8_t *src[ZUC_MAX_BURST] = { 0 };
- uint32_t *dst[ZUC_MAX_BURST];
- uint32_t length_in_bits[ZUC_MAX_BURST] = { 0 };
- uint8_t *iv[ZUC_MAX_BURST] = { 0 };
- const void *hash_keys[ZUC_MAX_BURST] = { 0 };
- struct zuc_session *sess;
- struct zuc_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
-
-
- for (i = 0; i < num_ops; i++) {
- /* Data must be byte aligned */
- if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Offset");
- break;
- }
-
- sess = sessions[i];
-
- length_in_bits[i] = ops[i]->sym->auth.data.length;
-
- src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
- (ops[i]->sym->auth.data.offset >> 3);
- iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- sess->auth_iv_offset);
-
- hash_keys[i] = sess->pKey_hash;
- if (sess->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY)
- dst[i] = (uint32_t *)qp_data->temp_digest[i];
- else
- dst[i] = (uint32_t *)ops[i]->sym->auth.digest.data;
-
- processed_ops++;
- }
-
- IMB_ZUC_EIA3_N_BUFFER(qp->mb_mgr, (const void **)hash_keys,
- (const void * const *)iv, (const void * const *)src,
- length_in_bits, dst, processed_ops);
-
- /*
- * If tag needs to be verified, compare generated tag
- * with attached tag
- */
- for (i = 0; i < processed_ops; i++)
- if (sessions[i]->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY)
- if (memcmp(dst[i], ops[i]->sym->auth.digest.data,
- ZUC_DIGEST_LENGTH) != 0)
- ops[i]->status =
- RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
-
- return processed_ops;
-}
-
-/** Process a batch of crypto ops which shares the same operation type. */
-static int
-process_ops(struct rte_crypto_op **ops, enum ipsec_mb_operation op_type,
- struct zuc_session **sessions,
- struct ipsec_mb_qp *qp, uint8_t num_ops)
-{
- unsigned int i;
- unsigned int processed_ops = 0;
-
- switch (op_type) {
- case IPSEC_MB_OP_ENCRYPT_ONLY:
- case IPSEC_MB_OP_DECRYPT_ONLY:
- processed_ops = process_zuc_cipher_op(qp, ops,
- sessions, num_ops);
- break;
- case IPSEC_MB_OP_HASH_GEN_ONLY:
- case IPSEC_MB_OP_HASH_VERIFY_ONLY:
- processed_ops = process_zuc_hash_op(qp, ops, sessions,
- num_ops);
- break;
- case IPSEC_MB_OP_ENCRYPT_THEN_HASH_GEN:
- case IPSEC_MB_OP_DECRYPT_THEN_HASH_VERIFY:
- processed_ops = process_zuc_cipher_op(qp, ops, sessions,
- num_ops);
- process_zuc_hash_op(qp, ops, sessions, processed_ops);
- break;
- case IPSEC_MB_OP_HASH_VERIFY_THEN_DECRYPT:
- case IPSEC_MB_OP_HASH_GEN_THEN_ENCRYPT:
- processed_ops = process_zuc_hash_op(qp, ops, sessions,
- num_ops);
- process_zuc_cipher_op(qp, ops, sessions, processed_ops);
- break;
- default:
- /* Operation not supported. */
- for (i = 0; i < num_ops; i++)
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
- }
-
- for (i = 0; i < num_ops; i++) {
- /*
- * If there was no error/authentication failure,
- * change status to successful.
- */
- if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
- ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /* Free session if a session-less crypto op. */
- if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(sessions[i], 0, sizeof(struct zuc_session));
- rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
- ops[i]->sym->session = NULL;
- }
- }
- return processed_ops;
-}
-
-static uint16_t
-zuc_pmd_dequeue_burst(void *queue_pair,
- struct rte_crypto_op **c_ops, uint16_t nb_ops)
-{
-
- struct rte_crypto_op *curr_c_op;
-
- struct zuc_session *curr_sess;
- struct zuc_session *sessions[ZUC_MAX_BURST];
- struct rte_crypto_op *int_c_ops[ZUC_MAX_BURST];
- enum ipsec_mb_operation prev_zuc_op = IPSEC_MB_OP_NOT_SUPPORTED;
- enum ipsec_mb_operation curr_zuc_op;
- struct ipsec_mb_qp *qp = queue_pair;
- unsigned int nb_dequeued;
- unsigned int i;
- uint8_t burst_size = 0;
- uint8_t processed_ops;
-
- nb_dequeued = rte_ring_dequeue_burst(qp->ingress_queue,
- (void **)c_ops, nb_ops, NULL);
-
-
- for (i = 0; i < nb_dequeued; i++) {
- curr_c_op = c_ops[i];
-
- curr_sess = (struct zuc_session *)
- ipsec_mb_get_session_private(qp, curr_c_op);
- if (unlikely(curr_sess == NULL)) {
- curr_c_op->status =
- RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
- break;
- }
-
- curr_zuc_op = curr_sess->op;
-
- /*
- * Batch ops that share the same operation type
- * (cipher only, auth only...).
- */
- if (burst_size == 0) {
- prev_zuc_op = curr_zuc_op;
- int_c_ops[0] = curr_c_op;
- sessions[0] = curr_sess;
- burst_size++;
- } else if (curr_zuc_op == prev_zuc_op) {
- int_c_ops[burst_size] = curr_c_op;
- sessions[burst_size] = curr_sess;
- burst_size++;
- /*
- * When there are enough ops to process in a batch,
- * process them, and start a new batch.
- */
- if (burst_size == ZUC_MAX_BURST) {
- processed_ops = process_ops(int_c_ops, curr_zuc_op,
- sessions, qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- }
- } else {
- /*
- * Different operation type, process the ops
- * of the previous type.
- */
- processed_ops = process_ops(int_c_ops, prev_zuc_op,
- sessions, qp, burst_size);
- if (processed_ops < burst_size) {
- burst_size = 0;
- break;
- }
-
- burst_size = 0;
- prev_zuc_op = curr_zuc_op;
-
- int_c_ops[0] = curr_c_op;
- sessions[0] = curr_sess;
- burst_size++;
- }
- }
-
- if (burst_size != 0) {
- /* Process the crypto ops of the last operation type. */
- processed_ops = process_ops(int_c_ops, prev_zuc_op,
- sessions, qp, burst_size);
- }
-
- qp->stats.dequeued_count += i;
- return i;
-}
+#include "pmd_aesni_mb_priv.h"
struct rte_cryptodev_ops zuc_pmd_ops = {
.dev_configure = ipsec_mb_config,
@@ -388,7 +54,7 @@ RTE_INIT(ipsec_mb_register_zuc)
= &ipsec_mb_pmds[IPSEC_MB_PMD_TYPE_ZUC];
zuc_data->caps = zuc_capabilities;
- zuc_data->dequeue_burst = zuc_pmd_dequeue_burst;
+ zuc_data->dequeue_burst = aesni_mb_dequeue_burst;
zuc_data->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO
| RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING
| RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA
@@ -398,6 +64,6 @@ RTE_INIT(ipsec_mb_register_zuc)
zuc_data->internals_priv_size = 0;
zuc_data->ops = &zuc_pmd_ops;
zuc_data->qp_priv_size = sizeof(struct zuc_qp_data);
- zuc_data->session_configure = zuc_session_configure;
- zuc_data->session_priv_size = sizeof(struct zuc_session);
+ zuc_data->session_configure = aesni_mb_session_configure;
+ zuc_data->session_priv_size = sizeof(struct aesni_mb_session);
}
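
Because only the internal handlers change, application code is unaffected:
the same rte_cryptodev burst calls now land in the JOB-API codepath for
these PMDs. A hedged usage sketch (dev_id, qp_id and the op array are
assumed to be configured elsewhere by the application):

#include <rte_cryptodev.h>

/* Sketch: enqueue a burst of symmetric crypto ops and drain the results.
 * Unchanged by this patch; shown only to illustrate that the unified
 * interface is transparent to callers.
 */
static uint16_t
run_crypto_burst(uint8_t dev_id, uint16_t qp_id,
		 struct rte_crypto_op **ops, uint16_t nb_ops)
{
	uint16_t enq = rte_cryptodev_enqueue_burst(dev_id, qp_id,
			ops, nb_ops);
	uint16_t deq = 0;

	while (deq < enq)
		deq += rte_cryptodev_dequeue_burst(dev_id, qp_id,
				ops + deq, enq - deq);
	return deq;
}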
--
2.25.1
Thread overview: 45+ messages
2023-12-12 15:36 [PATCH v1] " Brian Dooley
2023-12-14 15:15 ` Brian Dooley [this message]
2024-01-18 12:00 ` [PATCH v3] " Brian Dooley
2024-02-28 11:33 ` [PATCH v4] " Brian Dooley
2024-02-28 11:50 ` Power, Ciara
2024-02-29 16:23 ` Dooley, Brian
2024-02-29 16:32 ` Akhil Goyal
2024-03-04 7:33 ` Akhil Goyal
2024-03-05 5:39 ` Honnappa Nagarahalli
2024-03-05 17:31 ` Wathsala Wathawana Vithanage
2024-03-05 15:21 ` Wathsala Wathawana Vithanage
2024-03-05 17:42 ` [PATCH v5 1/4] crypto/ipsec_mb: bump minimum IPsec Multi-buffer version Brian Dooley
2024-03-05 17:42 ` [PATCH v5 2/4] doc: remove outdated version details Brian Dooley
2024-03-05 17:42 ` [PATCH v5 3/4] crypto/ipsec_mb: use new ipad/opad calculation API Brian Dooley
2024-03-05 17:42 ` [PATCH v5 4/4] crypto/ipsec_mb: unified IPsec MB interface Brian Dooley
2024-03-15 18:25 ` Patrick Robb
2024-03-05 19:11 ` [EXTERNAL] [PATCH v5 1/4] crypto/ipsec_mb: bump minimum IPsec Multi-buffer version Akhil Goyal
2024-03-05 19:50 ` Patrick Robb
2024-03-05 23:30 ` Patrick Robb
2024-03-06 3:57 ` Patrick Robb
2024-03-06 11:12 ` Power, Ciara
2024-03-06 14:59 ` Patrick Robb
2024-03-06 15:29 ` Power, Ciara
2024-03-07 16:21 ` Wathsala Wathawana Vithanage
2024-03-08 16:05 ` Power, Ciara
2024-03-12 16:26 ` Wathsala Wathawana Vithanage
2024-03-15 18:24 ` Patrick Robb
2024-03-12 13:50 ` [PATCH v6 1/5] ci: replace IPsec-mb package install Brian Dooley
2024-03-12 13:50 ` [PATCH v6 2/5] crypto/ipsec_mb: bump minimum IPsec Multi-buffer version Brian Dooley
2024-03-12 13:50 ` [PATCH v6 3/5] doc: remove outdated version details Brian Dooley
2024-03-12 13:50 ` [PATCH v6 4/5] crypto/ipsec_mb: use new ipad/opad calculation API Brian Dooley
2024-03-12 13:50 ` [PATCH v6 5/5] crypto/ipsec_mb: unify some IPsec MB PMDs Brian Dooley
2024-03-12 13:54 ` [PATCH v6 1/5] ci: replace IPsec-mb package install David Marchand
2024-03-12 15:26 ` Power, Ciara
2024-03-12 16:13 ` David Marchand
2024-03-12 17:07 ` Power, Ciara
2024-03-12 16:05 ` David Marchand
2024-03-12 16:16 ` Jack Bond-Preston
2024-03-12 17:08 ` Power, Ciara
2024-03-12 18:04 ` Power, Ciara
2024-03-15 18:26 ` Patrick Robb
2024-03-14 10:37 ` [PATCH v7 1/2] doc: remove outdated version details Brian Dooley
2024-03-14 10:37 ` [PATCH v7 2/2] doc: announce Intel IPsec MB version bump Brian Dooley
2024-03-14 12:04 ` Power, Ciara
2024-03-22 19:33 ` [EXTERNAL] [PATCH v7 1/2] doc: remove outdated version details Akhil Goyal