* [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support
@ 2020-07-24 11:00 Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-07-24 11:00 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch set replaces the NITROX PMD specific test suite with the
generic test suite and adds support for AES-GCM and cipher only offload.
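For context, a minimal sketch of what the new offloads look like at the
rte_cryptodev API level. This is an illustrative AES-128-GCM transform only;
the key buffer and IV_OFFSET below are placeholders, not values mandated by
this series:

    struct rte_crypto_sym_xform aead_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .next = NULL,
        .aead = {
            .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
            .algo = RTE_CRYPTO_AEAD_AES_GCM,
            .key = { .data = key, .length = 16 },        /* 16/24/32 bytes */
            .iv = { .offset = IV_OFFSET, .length = 16 }, /* salt + per-op IV */
            .digest_length = 16,
            .aad_length = 16,                            /* up to 512 bytes */
        },
    };

The key sizes, digest range, and the 512-byte AAD limit match the capability
table added in patch 2.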
Nagadheeraj Rottela (3):
test/crypto: replace NITROX PMD specific test suite
crypto/nitrox: support AES-GCM
crypto/nitrox: support cipher only crypto operations
app/test/test_cryptodev.c | 18 +-
doc/guides/cryptodevs/features/nitrox.ini | 3 +
doc/guides/cryptodevs/nitrox.rst | 6 +-
drivers/crypto/nitrox/nitrox_sym.c | 85 ++++-
.../crypto/nitrox/nitrox_sym_capabilities.c | 30 ++
drivers/crypto/nitrox/nitrox_sym_ctx.h | 5 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 357 ++++++++++++++----
7 files changed, 405 insertions(+), 99 deletions(-)
--
2.20.1
* [dpdk-dev] [PATCH 1/3] test/crypto: replace NITROX PMD specific test suite
2020-07-24 11:00 [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
@ 2020-07-24 11:00 ` Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
` (2 subsequent siblings)
3 siblings, 0 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-07-24 11:00 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
Replace the NITROX PMD specific tests with the generic test suite.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
app/test/test_cryptodev.c | 18 +-----------------
1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..162134a5c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12665,22 +12665,6 @@ static struct unit_test_suite cryptodev_ccp_testsuite = {
}
};
-static struct unit_test_suite cryptodev_nitrox_testsuite = {
- .suite_name = "Crypto NITROX Unit Test Suite",
- .setup = testsuite_setup,
- .teardown = testsuite_teardown,
- .unit_test_cases = {
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_device_configure_invalid_dev_id),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_device_configure_invalid_queue_pair_ids),
- TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all),
- TEST_CASE_ST(ut_setup, ut_teardown, test_3DES_chain_all),
-
- TEST_CASES_END() /**< NULL terminate unit test array */
- }
-};
-
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
@@ -13038,7 +13022,7 @@ test_cryptodev_nitrox(void)
return TEST_FAILED;
}
- return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
+ return unit_test_suite_runner(&cryptodev_testsuite);
}
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
--
2.20.1
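For readers following along: after this change the NITROX entry point becomes
a thin wrapper around the shared suite. A sketch of the resulting pattern,
with names matching test_cryptodev.c to the best of my knowledge (treat the
details as illustrative):

    static int
    test_cryptodev_nitrox(void)
    {
        gbl_driver_id = rte_cryptodev_driver_id_get(
                RTE_STR(CRYPTODEV_NAME_NITROX_PMD));
        if (gbl_driver_id == -1) {
            RTE_LOG(ERR, USER1, "NITROX PMD must be loaded.\n");
            return TEST_FAILED;
        }
        /* Generic suite; per-device capabilities decide what actually runs. */
        return unit_test_suite_runner(&cryptodev_testsuite);
    }
    REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);

The suite is still invoked the same way, e.g. by entering
cryptodev_nitrox_autotest at the dpdk-test prompt.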
* [dpdk-dev] [PATCH 2/3] crypto/nitrox: support AES-GCM
2020-07-24 11:00 [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
@ 2020-07-24 11:00 ` Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2020-07-26 18:57 ` [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Akhil Goyal
3 siblings, 0 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-07-24 11:00 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch adds support for the AES-GCM AEAD algorithm.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
doc/guides/cryptodevs/features/nitrox.ini | 3 +
doc/guides/cryptodevs/nitrox.rst | 4 +
drivers/crypto/nitrox/nitrox_sym.c | 82 +++++++-
.../crypto/nitrox/nitrox_sym_capabilities.c | 30 +++
drivers/crypto/nitrox/nitrox_sym_ctx.h | 5 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 182 +++++++++++++++---
6 files changed, 268 insertions(+), 38 deletions(-)
diff --git a/doc/guides/cryptodevs/features/nitrox.ini b/doc/guides/cryptodevs/features/nitrox.ini
index 183494731..a1d6bcb4f 100644
--- a/doc/guides/cryptodevs/features/nitrox.ini
+++ b/doc/guides/cryptodevs/features/nitrox.ini
@@ -34,6 +34,9 @@ SHA256 HMAC = Y
; Supported AEAD algorithms of the 'nitrox' crypto driver.
;
[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
;
; Supported Asymmetric algorithms of the 'nitrox' crypto driver.
diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 85f5212b6..91fca905a 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -26,6 +26,10 @@ Hash algorithms:
* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+
Limitations
-----------
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fad4a7a48..fe3ee6e23 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -20,6 +20,7 @@
#define NPS_PKT_IN_INSTR_SIZE 64
#define IV_FROM_DPTR 1
#define FLEXI_CRYPTO_ENCRYPT_HMAC 0x33
+#define FLEXI_CRYPTO_MAX_AAD_LEN 512
#define AES_KEYSIZE_128 16
#define AES_KEYSIZE_192 24
#define AES_KEYSIZE_256 32
@@ -297,6 +298,9 @@ get_crypto_chain_order(const struct rte_crypto_sym_xform *xform)
}
}
break;
+ case RTE_CRYPTO_SYM_XFORM_AEAD:
+ res = NITROX_CHAIN_COMBINED;
+ break;
default:
break;
}
@@ -431,17 +435,17 @@ get_flexi_auth_type(enum rte_crypto_auth_algorithm algo)
}
static bool
-auth_key_digest_is_valid(struct rte_crypto_auth_xform *xform,
- struct flexi_crypto_context *fctx)
+auth_key_is_valid(const uint8_t *data, uint16_t length,
+ struct flexi_crypto_context *fctx)
{
- if (unlikely(!xform->key.data && xform->key.length)) {
+ if (unlikely(!data && length)) {
NITROX_LOG(ERR, "Invalid auth key\n");
return false;
}
- if (unlikely(xform->key.length > sizeof(fctx->auth.opad))) {
+ if (unlikely(length > sizeof(fctx->auth.opad))) {
NITROX_LOG(ERR, "Invalid auth key length %d\n",
- xform->key.length);
+ length);
return false;
}
@@ -459,11 +463,10 @@ configure_auth_ctx(struct rte_crypto_auth_xform *xform,
if (unlikely(type == AUTH_INVALID))
return -ENOTSUP;
- if (unlikely(!auth_key_digest_is_valid(xform, fctx)))
+ if (unlikely(!auth_key_is_valid(xform->key.data, xform->key.length,
+ fctx)))
return -EINVAL;
- ctx->auth_op = xform->op;
- ctx->auth_algo = xform->algo;
ctx->digest_length = xform->digest_length;
fctx->flags = rte_be_to_cpu_64(fctx->flags);
@@ -476,6 +479,56 @@ configure_auth_ctx(struct rte_crypto_auth_xform *xform,
return 0;
}
+static int
+configure_aead_ctx(struct rte_crypto_aead_xform *xform,
+ struct nitrox_crypto_ctx *ctx)
+{
+ int aes_keylen;
+ struct flexi_crypto_context *fctx = &ctx->fctx;
+
+ if (unlikely(xform->aad_length > FLEXI_CRYPTO_MAX_AAD_LEN)) {
+ NITROX_LOG(ERR, "AAD length %d not supported\n",
+ xform->aad_length);
+ return -ENOTSUP;
+ }
+
+ if (unlikely(xform->algo != RTE_CRYPTO_AEAD_AES_GCM))
+ return -ENOTSUP;
+
+ aes_keylen = flexi_aes_keylen(xform->key.length, true);
+ if (unlikely(aes_keylen < 0))
+ return -EINVAL;
+
+ if (unlikely(!auth_key_is_valid(xform->key.data, xform->key.length,
+ fctx)))
+ return -EINVAL;
+
+ if (unlikely(xform->iv.length > MAX_IV_LEN))
+ return -EINVAL;
+
+ fctx->flags = rte_be_to_cpu_64(fctx->flags);
+ fctx->w0.cipher_type = CIPHER_AES_GCM;
+ fctx->w0.aes_keylen = aes_keylen;
+ fctx->w0.iv_source = IV_FROM_DPTR;
+ fctx->w0.hash_type = AUTH_NULL;
+ fctx->w0.auth_input_type = 1;
+ fctx->w0.mac_len = xform->digest_length;
+ fctx->flags = rte_cpu_to_be_64(fctx->flags);
+ memset(fctx->crypto.key, 0, sizeof(fctx->crypto.key));
+ memcpy(fctx->crypto.key, xform->key.data, xform->key.length);
+ memset(&fctx->auth, 0, sizeof(fctx->auth));
+ memcpy(fctx->auth.opad, xform->key.data, xform->key.length);
+
+ ctx->opcode = FLEXI_CRYPTO_ENCRYPT_HMAC;
+ ctx->req_op = (xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ NITROX_OP_ENCRYPT : NITROX_OP_DECRYPT;
+ ctx->iv.offset = xform->iv.offset;
+ ctx->iv.length = xform->iv.length;
+ ctx->digest_length = xform->digest_length;
+ ctx->aad_length = xform->aad_length;
+ return 0;
+}
+
static int
nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
struct rte_crypto_sym_xform *xform,
@@ -486,6 +539,8 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
struct nitrox_crypto_ctx *ctx;
struct rte_crypto_cipher_xform *cipher_xform = NULL;
struct rte_crypto_auth_xform *auth_xform = NULL;
+ struct rte_crypto_aead_xform *aead_xform = NULL;
+ int ret = -EINVAL;
if (rte_mempool_get(mempool, &mp_obj)) {
NITROX_LOG(ERR, "Couldn't allocate context\n");
@@ -503,8 +558,12 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
auth_xform = &xform->auth;
cipher_xform = &xform->next->cipher;
break;
+ case NITROX_CHAIN_COMBINED:
+ aead_xform = &xform->aead;
+ break;
default:
NITROX_LOG(ERR, "Crypto chain not supported\n");
+ ret = -ENOTSUP;
goto err;
}
@@ -518,12 +577,17 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
goto err;
}
+ if (aead_xform && unlikely(configure_aead_ctx(aead_xform, ctx))) {
+ NITROX_LOG(ERR, "Failed to configure aead ctx\n");
+ goto err;
+ }
+
ctx->iova = rte_mempool_virt2iova(ctx);
set_sym_session_private_data(sess, cdev->driver_id, ctx);
return 0;
err:
rte_mempool_put(mempool, mp_obj);
- return -EINVAL;
+ return ret;
}
static void
diff --git a/drivers/crypto/nitrox/nitrox_sym_capabilities.c b/drivers/crypto/nitrox/nitrox_sym_capabilities.c
index dc4df9185..a30cd9f8f 100644
--- a/drivers/crypto/nitrox/nitrox_sym_capabilities.c
+++ b/drivers/crypto/nitrox/nitrox_sym_capabilities.c
@@ -108,6 +108,36 @@ static const struct rte_cryptodev_capabilities nitrox_capabilities[] = {
}, }
}, }
},
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 512,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
diff --git a/drivers/crypto/nitrox/nitrox_sym_ctx.h b/drivers/crypto/nitrox/nitrox_sym_ctx.h
index 2985e519f..deb00fc1e 100644
--- a/drivers/crypto/nitrox/nitrox_sym_ctx.h
+++ b/drivers/crypto/nitrox/nitrox_sym_ctx.h
@@ -11,6 +11,7 @@
#define AES_MAX_KEY_SIZE 32
#define AES_BLOCK_SIZE 16
+#define AES_GCM_SALT_SIZE 4
enum nitrox_chain {
NITROX_CHAIN_CIPHER_ONLY,
@@ -69,14 +70,14 @@ struct flexi_crypto_context {
struct nitrox_crypto_ctx {
struct flexi_crypto_context fctx;
enum nitrox_chain nitrox_chain;
- enum rte_crypto_auth_operation auth_op;
- enum rte_crypto_auth_algorithm auth_algo;
struct {
uint16_t offset;
uint16_t length;
} iv;
rte_iova_t iova;
+ uint8_t salt[AES_GCM_SALT_SIZE];
uint16_t digest_length;
+ uint16_t aad_length;
uint8_t opcode;
uint8_t req_op;
};
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index d9b426776..93d59b048 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -238,12 +238,13 @@ create_se_instr(struct nitrox_softreq *sr, uint8_t qno)
}
static void
-softreq_copy_iv(struct nitrox_softreq *sr)
+softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
{
- sr->iv.virt = rte_crypto_op_ctod_offset(sr->op, uint8_t *,
- sr->ctx->iv.offset);
- sr->iv.iova = rte_crypto_op_ctophys_offset(sr->op, sr->ctx->iv.offset);
- sr->iv.len = sr->ctx->iv.length;
+ uint16_t offset = sr->ctx->iv.offset + salt_size;
+
+ sr->iv.virt = rte_crypto_op_ctod_offset(sr->op, uint8_t *, offset);
+ sr->iv.iova = rte_crypto_op_ctophys_offset(sr->op, offset);
+ sr->iv.len = sr->ctx->iv.length - salt_size;
}
static int
@@ -254,7 +255,7 @@ extract_cipher_auth_digest(struct nitrox_softreq *sr,
struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
op->sym->m_src;
- if (sr->ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY &&
+ if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
unlikely(!op->sym->auth.digest.data))
return -EINVAL;
@@ -352,6 +353,13 @@ create_cipher_auth_sglist(struct nitrox_softreq *sr,
if (unlikely(auth_only_len < 0))
return -EINVAL;
+ if (unlikely(
+ op->sym->cipher.data.offset + op->sym->cipher.data.length !=
+ op->sym->auth.data.offset + op->sym->auth.data.length)) {
+ NITROX_LOG(ERR, "Auth only data after cipher data not supported\n");
+ return -ENOTSUP;
+ }
+
err = create_sglist_from_mbuf(sgtbl, mbuf, op->sym->auth.data.offset,
auth_only_len);
if (unlikely(err))
@@ -365,6 +373,41 @@ create_cipher_auth_sglist(struct nitrox_softreq *sr,
return 0;
}
+static int
+create_combined_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
+ struct rte_mbuf *mbuf)
+{
+ struct rte_crypto_op *op = sr->op;
+
+ fill_sglist(sgtbl, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ fill_sglist(sgtbl, sr->ctx->aad_length, op->sym->aead.aad.phys_addr,
+ op->sym->aead.aad.data);
+ return create_sglist_from_mbuf(sgtbl, mbuf, op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+}
+
+static int
+create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
+ struct rte_mbuf *mbuf)
+{
+ int err;
+
+ switch (sr->ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_AUTH:
+ case NITROX_CHAIN_AUTH_CIPHER:
+ err = create_cipher_auth_sglist(sr, sgtbl, mbuf);
+ break;
+ case NITROX_CHAIN_COMBINED:
+ err = create_combined_sglist(sr, sgtbl, mbuf);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
static void
create_sgcomp(struct nitrox_sgtable *sgtbl)
{
@@ -383,17 +426,16 @@ create_sgcomp(struct nitrox_sgtable *sgtbl)
}
static int
-create_cipher_auth_inbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
int err;
struct nitrox_crypto_ctx *ctx = sr->ctx;
- err = create_cipher_auth_sglist(sr, &sr->in, sr->op->sym->m_src);
+ err = create_aead_sglist(sr, &sr->in, sr->op->sym->m_src);
if (unlikely(err))
return err;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY)
+ if (ctx->req_op == NITROX_OP_DECRYPT)
fill_sglist(&sr->in, digest->len, digest->iova, digest->virt);
create_sgcomp(&sr->in);
@@ -402,25 +444,24 @@ create_cipher_auth_inbuf(struct nitrox_softreq *sr,
}
static int
-create_cipher_auth_oop_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_oop_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
int err;
struct nitrox_crypto_ctx *ctx = sr->ctx;
- err = create_cipher_auth_sglist(sr, &sr->out, sr->op->sym->m_dst);
+ err = create_aead_sglist(sr, &sr->out, sr->op->sym->m_dst);
if (unlikely(err))
return err;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_GENERATE)
+ if (ctx->req_op == NITROX_OP_ENCRYPT)
fill_sglist(&sr->out, digest->len, digest->iova, digest->virt);
return 0;
}
static void
-create_cipher_auth_inplace_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_inplace_outbuf(struct nitrox_softreq *sr,
+ struct nitrox_sglist *digest)
{
int i, cnt;
struct nitrox_crypto_ctx *ctx = sr->ctx;
@@ -433,17 +474,16 @@ create_cipher_auth_inplace_outbuf(struct nitrox_softreq *sr,
}
sr->out.map_bufs_cnt = cnt;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ if (ctx->req_op == NITROX_OP_ENCRYPT) {
fill_sglist(&sr->out, digest->len, digest->iova,
digest->virt);
- } else if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ } else if (ctx->req_op == NITROX_OP_DECRYPT) {
sr->out.map_bufs_cnt--;
}
}
static int
-create_cipher_auth_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
struct rte_crypto_op *op = sr->op;
int cnt = 0;
@@ -458,11 +498,11 @@ create_cipher_auth_outbuf(struct nitrox_softreq *sr,
if (op->sym->m_dst) {
int err;
- err = create_cipher_auth_oop_outbuf(sr, digest);
+ err = create_aead_oop_outbuf(sr, digest);
if (unlikely(err))
return err;
} else {
- create_cipher_auth_inplace_outbuf(sr, digest);
+ create_aead_inplace_outbuf(sr, digest);
}
cnt = sr->out.map_bufs_cnt;
@@ -516,16 +556,16 @@ process_cipher_auth_data(struct nitrox_softreq *sr)
int err;
struct nitrox_sglist digest;
- softreq_copy_iv(sr);
+ softreq_copy_iv(sr, 0);
err = extract_cipher_auth_digest(sr, &digest);
if (unlikely(err))
return err;
- err = create_cipher_auth_inbuf(sr, &digest);
+ err = create_aead_inbuf(sr, &digest);
if (unlikely(err))
return err;
- err = create_cipher_auth_outbuf(sr, &digest);
+ err = create_aead_outbuf(sr, &digest);
if (unlikely(err))
return err;
@@ -534,6 +574,86 @@ process_cipher_auth_data(struct nitrox_softreq *sr)
return 0;
}
+static int
+softreq_copy_salt(struct nitrox_softreq *sr)
+{
+ struct nitrox_crypto_ctx *ctx = sr->ctx;
+ uint8_t *addr;
+
+ if (unlikely(ctx->iv.length < AES_GCM_SALT_SIZE)) {
+ NITROX_LOG(ERR, "Invalid IV length %d\n", ctx->iv.length);
+ return -EINVAL;
+ }
+
+ addr = rte_crypto_op_ctod_offset(sr->op, uint8_t *, ctx->iv.offset);
+ if (!memcmp(ctx->salt, addr, AES_GCM_SALT_SIZE))
+ return 0;
+
+ memcpy(ctx->salt, addr, AES_GCM_SALT_SIZE);
+ memcpy(ctx->fctx.crypto.iv, addr, AES_GCM_SALT_SIZE);
+ return 0;
+}
+
+static int
+extract_combined_digest(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
+{
+ struct rte_crypto_op *op = sr->op;
+ struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ digest->len = sr->ctx->digest_length;
+ if (op->sym->aead.digest.data) {
+ digest->iova = op->sym->aead.digest.phys_addr;
+ digest->virt = op->sym->aead.digest.data;
+
+ return 0;
+ }
+
+ if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->aead.data.offset +
+ op->sym->aead.data.length + digest->len))
+ return -EINVAL;
+
+ digest->iova = rte_pktmbuf_mtophys_offset(mdst,
+ op->sym->aead.data.offset +
+ op->sym->aead.data.length);
+ digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+ op->sym->aead.data.offset +
+ op->sym->aead.data.length);
+
+ return 0;
+}
+
+static int
+process_combined_data(struct nitrox_softreq *sr)
+{
+ int err;
+ struct nitrox_sglist digest;
+ struct rte_crypto_op *op = sr->op;
+
+ err = softreq_copy_salt(sr);
+ if (unlikely(err))
+ return err;
+
+ softreq_copy_iv(sr, AES_GCM_SALT_SIZE);
+ err = extract_combined_digest(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ err = create_aead_inbuf(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ err = create_aead_outbuf(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ create_aead_gph(op->sym->aead.data.length, sr->iv.len,
+ op->sym->aead.data.length + sr->ctx->aad_length,
+ &sr->gph);
+
+ return 0;
+}
+
static int
process_softreq(struct nitrox_softreq *sr)
{
@@ -545,6 +665,9 @@ process_softreq(struct nitrox_softreq *sr)
case NITROX_CHAIN_AUTH_CIPHER:
err = process_cipher_auth_data(sr);
break;
+ case NITROX_CHAIN_COMBINED:
+ err = process_combined_data(sr);
+ break;
default:
err = -EINVAL;
break;
@@ -558,10 +681,15 @@ nitrox_process_se_req(uint16_t qno, struct rte_crypto_op *op,
struct nitrox_crypto_ctx *ctx,
struct nitrox_softreq *sr)
{
+ int err;
+
softreq_init(sr, sr->iova);
sr->ctx = ctx;
sr->op = op;
- process_softreq(sr);
+ err = process_softreq(sr);
+ if (unlikely(err))
+ return err;
+
create_se_instr(sr, qno);
sr->timeout = rte_get_timer_cycles() + CMD_TIMEOUT * rte_get_timer_hz();
return 0;
@@ -577,7 +705,7 @@ nitrox_check_se_req(struct nitrox_softreq *sr, struct rte_crypto_op **op)
cc = *(volatile uint64_t *)(&sr->resp.completion);
orh = *(volatile uint64_t *)(&sr->resp.orh);
if (cc != PENDING_SIG)
- err = 0;
+ err = orh & 0xff;
else if ((orh != PENDING_SIG) && (orh & 0xff))
err = orh & 0xff;
else if (rte_get_timer_cycles() >= sr->timeout)
--
2.20.1
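A note on the GCM salt handling introduced above: the driver programs the
first AES_GCM_SALT_SIZE (4) bytes of the session IV into the flexi context
once (softreq_copy_salt() re-copies only when those bytes change), and only
the remaining IV bytes travel with each request (softreq_copy_iv() with a
4-byte skip). Under that scheme an application would lay out each op's IV
region roughly as below; IV_OFFSET, salt and per_pkt_iv are placeholders:

    uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);

    memcpy(iv_ptr, salt, 4);            /* constant within a session */
    memcpy(iv_ptr + 4, per_pkt_iv, 12); /* unique per operation */

Keeping the salt constant within a session therefore avoids a context rewrite
on every request.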
* [dpdk-dev] [PATCH 3/3] crypto/nitrox: support cipher only crypto operations
2020-07-24 11:00 [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
@ 2020-07-24 11:00 ` Nagadheeraj Rottela
2020-09-22 19:11 ` Akhil Goyal
2020-07-26 18:57 ` [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Akhil Goyal
3 siblings, 1 reply; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-07-24 11:00 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch adds support for cipher only crypto operations.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
doc/guides/cryptodevs/nitrox.rst | 2 -
drivers/crypto/nitrox/nitrox_sym.c | 3 +
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 189 ++++++++++++++++------
3 files changed, 143 insertions(+), 51 deletions(-)
diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 91fca905a..095e545c6 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -33,8 +33,6 @@ Supported AEAD algorithms:
Limitations
-----------
-* AES_CBC Cipher Only combination is not supported.
-* 3DES Cipher Only combination is not supported.
* Session-less APIs are not supported.
Installation
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fe3ee6e23..2768bdd2e 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -550,6 +550,9 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
ctx = mp_obj;
ctx->nitrox_chain = get_crypto_chain_order(xform);
switch (ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_ONLY:
+ cipher_xform = &xform->cipher;
+ break;
case NITROX_CHAIN_CIPHER_AUTH:
cipher_xform = &xform->cipher;
auth_xform = &xform->next->auth;
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 93d59b048..b5bbd1fd2 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -247,38 +247,6 @@ softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
sr->iv.len = sr->ctx->iv.length - salt_size;
}
-static int
-extract_cipher_auth_digest(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
-{
- struct rte_crypto_op *op = sr->op;
- struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
- op->sym->m_src;
-
- if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
- unlikely(!op->sym->auth.digest.data))
- return -EINVAL;
-
- digest->len = sr->ctx->digest_length;
- if (op->sym->auth.digest.data) {
- digest->iova = op->sym->auth.digest.phys_addr;
- digest->virt = op->sym->auth.digest.data;
- return 0;
- }
-
- if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
- op->sym->auth.data.length + digest->len))
- return -EINVAL;
-
- digest->iova = rte_pktmbuf_mtophys_offset(mdst,
- op->sym->auth.data.offset +
- op->sym->auth.data.length);
- digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
- op->sym->auth.data.offset +
- op->sym->auth.data.length);
- return 0;
-}
-
static void
fill_sglist(struct nitrox_sgtable *sgtbl, uint16_t len, rte_iova_t iova,
void *virt)
@@ -340,6 +308,143 @@ create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf,
return 0;
}
+static void
+create_sgcomp(struct nitrox_sgtable *sgtbl)
+{
+ int i, j, nr_sgcomp;
+ struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
+ struct nitrox_sglist *sglist = sgtbl->sglist;
+
+ nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
+ sgtbl->nr_sgcomp = nr_sgcomp;
+ for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
+ for (j = 0; j < 4; j++, sglist++) {
+ sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
+ sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
+ }
+ }
+}
+
+static int
+create_cipher_inbuf(struct nitrox_softreq *sr)
+{
+ int err;
+ struct rte_crypto_op *op = sr->op;
+
+ fill_sglist(&sr->in, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ err = create_sglist_from_mbuf(&sr->in, op->sym->m_src,
+ op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+ if (unlikely(err))
+ return err;
+
+ create_sgcomp(&sr->in);
+ sr->dptr = sr->iova + offsetof(struct nitrox_softreq, in.sgcomp);
+
+ return 0;
+}
+
+static int
+create_cipher_outbuf(struct nitrox_softreq *sr)
+{
+ struct rte_crypto_op *op = sr->op;
+ int err, cnt = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ sr->resp.orh = PENDING_SIG;
+ sr->out.sglist[cnt].len = sizeof(sr->resp.orh);
+ sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+ resp.orh);
+ sr->out.sglist[cnt].virt = &sr->resp.orh;
+ cnt++;
+
+ sr->out.map_bufs_cnt = cnt;
+ fill_sglist(&sr->out, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ err = create_sglist_from_mbuf(&sr->out, m_dst,
+ op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+ if (unlikely(err))
+ return err;
+
+ cnt = sr->out.map_bufs_cnt;
+ sr->resp.completion = PENDING_SIG;
+ sr->out.sglist[cnt].len = sizeof(sr->resp.completion);
+ sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+ resp.completion);
+ sr->out.sglist[cnt].virt = &sr->resp.completion;
+ cnt++;
+
+ RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+ sr->out.map_bufs_cnt = cnt;
+
+ create_sgcomp(&sr->out);
+ sr->rptr = sr->iova + offsetof(struct nitrox_softreq, out.sgcomp);
+
+ return 0;
+}
+
+static void
+create_cipher_gph(uint32_t cryptlen, uint16_t ivlen, struct gphdr *gph)
+{
+ gph->param0 = rte_cpu_to_be_16(cryptlen);
+ gph->param1 = 0;
+ gph->param2 = rte_cpu_to_be_16(ivlen);
+ gph->param3 = 0;
+}
+
+static int
+process_cipher_data(struct nitrox_softreq *sr)
+{
+ struct rte_crypto_op *op = sr->op;
+ int err;
+
+ softreq_copy_iv(sr, 0);
+ err = create_cipher_inbuf(sr);
+ if (unlikely(err))
+ return err;
+
+ err = create_cipher_outbuf(sr);
+ if (unlikely(err))
+ return err;
+
+ create_cipher_gph(op->sym->cipher.data.length, sr->iv.len, &sr->gph);
+
+ return 0;
+}
+
+static int
+extract_cipher_auth_digest(struct nitrox_softreq *sr,
+ struct nitrox_sglist *digest)
+{
+ struct rte_crypto_op *op = sr->op;
+ struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
+ unlikely(!op->sym->auth.digest.data))
+ return -EINVAL;
+
+ digest->len = sr->ctx->digest_length;
+ if (op->sym->auth.digest.data) {
+ digest->iova = op->sym->auth.digest.phys_addr;
+ digest->virt = op->sym->auth.digest.data;
+ return 0;
+ }
+
+ if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
+ op->sym->auth.data.length + digest->len))
+ return -EINVAL;
+
+ digest->iova = rte_pktmbuf_mtophys_offset(mdst,
+ op->sym->auth.data.offset +
+ op->sym->auth.data.length);
+ digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+ op->sym->auth.data.offset +
+ op->sym->auth.data.length);
+ return 0;
+}
+
static int
create_cipher_auth_sglist(struct nitrox_softreq *sr,
struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf)
@@ -408,23 +513,6 @@ create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
return err;
}
-static void
-create_sgcomp(struct nitrox_sgtable *sgtbl)
-{
- int i, j, nr_sgcomp;
- struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
- struct nitrox_sglist *sglist = sgtbl->sglist;
-
- nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
- sgtbl->nr_sgcomp = nr_sgcomp;
- for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
- for (j = 0; j < 4; j++, sglist++) {
- sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
- sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
- }
- }
-}
-
static int
create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
@@ -661,6 +749,9 @@ process_softreq(struct nitrox_softreq *sr)
int err = 0;
switch (ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_ONLY:
+ err = process_cipher_data(sr);
+ break;
case NITROX_CHAIN_CIPHER_AUTH:
case NITROX_CHAIN_AUTH_CIPHER:
err = process_cipher_auth_data(sr);
--
2.20.1
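With the above in place the PMD accepts a single cipher transform with no
chained auth. A minimal sketch of such a session request (key and IV_OFFSET
are placeholders):

    struct rte_crypto_sym_xform cipher_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .next = NULL,
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_CBC,
            .key = { .data = key, .length = 16 },
            .iv = { .offset = IV_OFFSET, .length = 16 },
        },
    };

The session code maps this to NITROX_CHAIN_CIPHER_ONLY, which
process_softreq() now dispatches to the new process_cipher_data() path.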
* Re: [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support
2020-07-24 11:00 [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
` (2 preceding siblings ...)
2020-07-24 11:00 ` [dpdk-dev] [PATCH 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
@ 2020-07-26 18:57 ` Akhil Goyal
2020-07-28 6:41 ` Nagadheeraj Rottela
3 siblings, 1 reply; 17+ messages in thread
From: Akhil Goyal @ 2020-07-26 18:57 UTC (permalink / raw)
To: Nagadheeraj Rottela; +Cc: dev, jsrikanth
> Subject: [PATCH 0/3] Add AES-GCM and cipher only offload support
>
> This patch set replaces the NITROX PMD specific test suite with the
> generic test suite and adds support for AES-GCM and cipher only offload.
>
I hope this series is for the next release cycle.
From next time, please mention 20.11 in the subject while the 20.08 release cycle is still in progress.
* Re: [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support
2020-07-26 18:57 ` [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Akhil Goyal
@ 2020-07-28 6:41 ` Nagadheeraj Rottela
0 siblings, 0 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-07-28 6:41 UTC (permalink / raw)
To: Akhil Goyal; +Cc: dev, Srikanth Jampala
> -----Original Message-----
> From: Akhil Goyal <akhil.goyal@nxp.com>
> Sent: Monday, July 27, 2020 12:28 AM
> To: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
> Cc: dev@dpdk.org; Srikanth Jampala <jsrikanth@marvell.com>
> Subject: [EXT] RE: [PATCH 0/3] Add AES-GCM and cipher only offload support
>
> > Subject: [PATCH 0/3] Add AES-GCM and cipher only offload support
> >
> > This patch set replaces the NITROX PMD specific test suite with the
> > generic test suite and adds support for AES-GCM and cipher only offload.
> >
> I hope this series is for the next release cycle.
> From next time, please mention 20.11 in the subject while the 20.08 release
> cycle is still in progress.
Sure, I will comply with this from next time.
* Re: [dpdk-dev] [PATCH 3/3] crypto/nitrox: support cipher only crypto operations
2020-07-24 11:00 ` [dpdk-dev] [PATCH 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
@ 2020-09-22 19:11 ` Akhil Goyal
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
0 siblings, 1 reply; 17+ messages in thread
From: Akhil Goyal @ 2020-09-22 19:11 UTC (permalink / raw)
To: Nagadheeraj Rottela; +Cc: dev, jsrikanth
Hi Nagadheeraj,
> Subject: [PATCH 3/3] crypto/nitrox: support cipher only crypto operations
>
> This patch adds cipher only crypto operation support.
>
> Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
> ---
Could you please rebase this patch on the latest TOT?
I see a conflict in the last patch.
You can also add an entry in the release notes for AES-GCM and cipher only
support.
Regards,
Akhil
* [dpdk-dev] [PATCH v2 0/3] Add AES-GCM and cipher only offload support
2020-09-22 19:11 ` Akhil Goyal
@ 2020-09-24 13:04 ` Nagadheeraj Rottela
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-09-24 13:04 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch set replaces the NITROX PMD specific test suite with the
generic test suite and adds support for AES-GCM and cipher only offload.
---
v2:
* Rebased patches to latest master and resolved merge conflict.
* Updated release notes.
Nagadheeraj Rottela (3):
test/crypto: replace NITROX PMD specific test suite
crypto/nitrox: support AES-GCM
crypto/nitrox: support cipher only crypto operations
app/test/test_cryptodev.c | 18 +-
doc/guides/cryptodevs/features/nitrox.ini | 3 +
doc/guides/cryptodevs/nitrox.rst | 6 +-
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/nitrox/nitrox_sym.c | 85 ++++-
.../crypto/nitrox/nitrox_sym_capabilities.c | 30 ++
drivers/crypto/nitrox/nitrox_sym_ctx.h | 5 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 357 ++++++++++++++----
8 files changed, 410 insertions(+), 99 deletions(-)
--
2.20.1
* [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
@ 2020-09-24 13:04 ` Nagadheeraj Rottela
2020-10-06 20:07 ` Akhil Goyal
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2 siblings, 1 reply; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-09-24 13:04 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
Replace the NITROX PMD specific tests with the generic test suite.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
app/test/test_cryptodev.c | 18 +-----------------
1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..162134a5c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12665,22 +12665,6 @@ static struct unit_test_suite cryptodev_ccp_testsuite = {
}
};
-static struct unit_test_suite cryptodev_nitrox_testsuite = {
- .suite_name = "Crypto NITROX Unit Test Suite",
- .setup = testsuite_setup,
- .teardown = testsuite_teardown,
- .unit_test_cases = {
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_device_configure_invalid_dev_id),
- TEST_CASE_ST(ut_setup, ut_teardown,
- test_device_configure_invalid_queue_pair_ids),
- TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all),
- TEST_CASE_ST(ut_setup, ut_teardown, test_3DES_chain_all),
-
- TEST_CASES_END() /**< NULL terminate unit test array */
- }
-};
-
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
@@ -13038,7 +13022,7 @@ test_cryptodev_nitrox(void)
return TEST_FAILED;
}
- return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
+ return unit_test_suite_runner(&cryptodev_testsuite);
}
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
--
2.20.1
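The generic suite gates each case on device capabilities rather than on a
hand-maintained per-PMD list. A sketch of the kind of check involved; the
capability API calls are the standard rte_cryptodev ones, while dev_id and
the key/digest/aad/iv lengths are illustrative parameters:

    struct rte_cryptodev_sym_capability_idx idx = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .algo.aead = RTE_CRYPTO_AEAD_AES_GCM,
    };
    const struct rte_cryptodev_symmetric_capability *cap =
        rte_cryptodev_sym_capability_get(dev_id, &idx);

    if (cap == NULL ||
        rte_cryptodev_sym_capability_check_aead(cap, key_len, digest_len,
                                                aad_len, iv_len) != 0)
        return TEST_SKIPPED;    /* device cannot run this case */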
* [dpdk-dev] [PATCH v2 2/3] crypto/nitrox: support AES-GCM
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
@ 2020-09-24 13:04 ` Nagadheeraj Rottela
2020-10-06 20:04 ` Akhil Goyal
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2 siblings, 1 reply; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-09-24 13:04 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch adds support for the AES-GCM AEAD algorithm.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
doc/guides/cryptodevs/features/nitrox.ini | 3 +
doc/guides/cryptodevs/nitrox.rst | 4 +
drivers/crypto/nitrox/nitrox_sym.c | 82 +++++++-
.../crypto/nitrox/nitrox_sym_capabilities.c | 30 +++
drivers/crypto/nitrox/nitrox_sym_ctx.h | 5 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 182 +++++++++++++++---
6 files changed, 268 insertions(+), 38 deletions(-)
diff --git a/doc/guides/cryptodevs/features/nitrox.ini b/doc/guides/cryptodevs/features/nitrox.ini
index 183494731..a1d6bcb4f 100644
--- a/doc/guides/cryptodevs/features/nitrox.ini
+++ b/doc/guides/cryptodevs/features/nitrox.ini
@@ -34,6 +34,9 @@ SHA256 HMAC = Y
; Supported AEAD algorithms of the 'nitrox' crypto driver.
;
[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
;
; Supported Asymmetric algorithms of the 'nitrox' crypto driver.
diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 85f5212b6..91fca905a 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -26,6 +26,10 @@ Hash algorithms:
* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+
Limitations
-----------
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fad4a7a48..fe3ee6e23 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -20,6 +20,7 @@
#define NPS_PKT_IN_INSTR_SIZE 64
#define IV_FROM_DPTR 1
#define FLEXI_CRYPTO_ENCRYPT_HMAC 0x33
+#define FLEXI_CRYPTO_MAX_AAD_LEN 512
#define AES_KEYSIZE_128 16
#define AES_KEYSIZE_192 24
#define AES_KEYSIZE_256 32
@@ -297,6 +298,9 @@ get_crypto_chain_order(const struct rte_crypto_sym_xform *xform)
}
}
break;
+ case RTE_CRYPTO_SYM_XFORM_AEAD:
+ res = NITROX_CHAIN_COMBINED;
+ break;
default:
break;
}
@@ -431,17 +435,17 @@ get_flexi_auth_type(enum rte_crypto_auth_algorithm algo)
}
static bool
-auth_key_digest_is_valid(struct rte_crypto_auth_xform *xform,
- struct flexi_crypto_context *fctx)
+auth_key_is_valid(const uint8_t *data, uint16_t length,
+ struct flexi_crypto_context *fctx)
{
- if (unlikely(!xform->key.data && xform->key.length)) {
+ if (unlikely(!data && length)) {
NITROX_LOG(ERR, "Invalid auth key\n");
return false;
}
- if (unlikely(xform->key.length > sizeof(fctx->auth.opad))) {
+ if (unlikely(length > sizeof(fctx->auth.opad))) {
NITROX_LOG(ERR, "Invalid auth key length %d\n",
- xform->key.length);
+ length);
return false;
}
@@ -459,11 +463,10 @@ configure_auth_ctx(struct rte_crypto_auth_xform *xform,
if (unlikely(type == AUTH_INVALID))
return -ENOTSUP;
- if (unlikely(!auth_key_digest_is_valid(xform, fctx)))
+ if (unlikely(!auth_key_is_valid(xform->key.data, xform->key.length,
+ fctx)))
return -EINVAL;
- ctx->auth_op = xform->op;
- ctx->auth_algo = xform->algo;
ctx->digest_length = xform->digest_length;
fctx->flags = rte_be_to_cpu_64(fctx->flags);
@@ -476,6 +479,56 @@ configure_auth_ctx(struct rte_crypto_auth_xform *xform,
return 0;
}
+static int
+configure_aead_ctx(struct rte_crypto_aead_xform *xform,
+ struct nitrox_crypto_ctx *ctx)
+{
+ int aes_keylen;
+ struct flexi_crypto_context *fctx = &ctx->fctx;
+
+ if (unlikely(xform->aad_length > FLEXI_CRYPTO_MAX_AAD_LEN)) {
+ NITROX_LOG(ERR, "AAD length %d not supported\n",
+ xform->aad_length);
+ return -ENOTSUP;
+ }
+
+ if (unlikely(xform->algo != RTE_CRYPTO_AEAD_AES_GCM))
+ return -ENOTSUP;
+
+ aes_keylen = flexi_aes_keylen(xform->key.length, true);
+ if (unlikely(aes_keylen < 0))
+ return -EINVAL;
+
+ if (unlikely(!auth_key_is_valid(xform->key.data, xform->key.length,
+ fctx)))
+ return -EINVAL;
+
+ if (unlikely(xform->iv.length > MAX_IV_LEN))
+ return -EINVAL;
+
+ fctx->flags = rte_be_to_cpu_64(fctx->flags);
+ fctx->w0.cipher_type = CIPHER_AES_GCM;
+ fctx->w0.aes_keylen = aes_keylen;
+ fctx->w0.iv_source = IV_FROM_DPTR;
+ fctx->w0.hash_type = AUTH_NULL;
+ fctx->w0.auth_input_type = 1;
+ fctx->w0.mac_len = xform->digest_length;
+ fctx->flags = rte_cpu_to_be_64(fctx->flags);
+ memset(fctx->crypto.key, 0, sizeof(fctx->crypto.key));
+ memcpy(fctx->crypto.key, xform->key.data, xform->key.length);
+ memset(&fctx->auth, 0, sizeof(fctx->auth));
+ memcpy(fctx->auth.opad, xform->key.data, xform->key.length);
+
+ ctx->opcode = FLEXI_CRYPTO_ENCRYPT_HMAC;
+ ctx->req_op = (xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ NITROX_OP_ENCRYPT : NITROX_OP_DECRYPT;
+ ctx->iv.offset = xform->iv.offset;
+ ctx->iv.length = xform->iv.length;
+ ctx->digest_length = xform->digest_length;
+ ctx->aad_length = xform->aad_length;
+ return 0;
+}
+
static int
nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
struct rte_crypto_sym_xform *xform,
@@ -486,6 +539,8 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
struct nitrox_crypto_ctx *ctx;
struct rte_crypto_cipher_xform *cipher_xform = NULL;
struct rte_crypto_auth_xform *auth_xform = NULL;
+ struct rte_crypto_aead_xform *aead_xform = NULL;
+ int ret = -EINVAL;
if (rte_mempool_get(mempool, &mp_obj)) {
NITROX_LOG(ERR, "Couldn't allocate context\n");
@@ -503,8 +558,12 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
auth_xform = &xform->auth;
cipher_xform = &xform->next->cipher;
break;
+ case NITROX_CHAIN_COMBINED:
+ aead_xform = &xform->aead;
+ break;
default:
NITROX_LOG(ERR, "Crypto chain not supported\n");
+ ret = -ENOTSUP;
goto err;
}
@@ -518,12 +577,17 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
goto err;
}
+ if (aead_xform && unlikely(configure_aead_ctx(aead_xform, ctx))) {
+ NITROX_LOG(ERR, "Failed to configure aead ctx\n");
+ goto err;
+ }
+
ctx->iova = rte_mempool_virt2iova(ctx);
set_sym_session_private_data(sess, cdev->driver_id, ctx);
return 0;
err:
rte_mempool_put(mempool, mp_obj);
- return -EINVAL;
+ return ret;
}
static void
diff --git a/drivers/crypto/nitrox/nitrox_sym_capabilities.c b/drivers/crypto/nitrox/nitrox_sym_capabilities.c
index dc4df9185..a30cd9f8f 100644
--- a/drivers/crypto/nitrox/nitrox_sym_capabilities.c
+++ b/drivers/crypto/nitrox/nitrox_sym_capabilities.c
@@ -108,6 +108,36 @@ static const struct rte_cryptodev_capabilities nitrox_capabilities[] = {
}, }
}, }
},
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 512,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
diff --git a/drivers/crypto/nitrox/nitrox_sym_ctx.h b/drivers/crypto/nitrox/nitrox_sym_ctx.h
index 2985e519f..deb00fc1e 100644
--- a/drivers/crypto/nitrox/nitrox_sym_ctx.h
+++ b/drivers/crypto/nitrox/nitrox_sym_ctx.h
@@ -11,6 +11,7 @@
#define AES_MAX_KEY_SIZE 32
#define AES_BLOCK_SIZE 16
+#define AES_GCM_SALT_SIZE 4
enum nitrox_chain {
NITROX_CHAIN_CIPHER_ONLY,
@@ -69,14 +70,14 @@ struct flexi_crypto_context {
struct nitrox_crypto_ctx {
struct flexi_crypto_context fctx;
enum nitrox_chain nitrox_chain;
- enum rte_crypto_auth_operation auth_op;
- enum rte_crypto_auth_algorithm auth_algo;
struct {
uint16_t offset;
uint16_t length;
} iv;
rte_iova_t iova;
+ uint8_t salt[AES_GCM_SALT_SIZE];
uint16_t digest_length;
+ uint16_t aad_length;
uint8_t opcode;
uint8_t req_op;
};
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 449224780..47f5244b1 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -238,12 +238,13 @@ create_se_instr(struct nitrox_softreq *sr, uint8_t qno)
}
static void
-softreq_copy_iv(struct nitrox_softreq *sr)
+softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
{
- sr->iv.virt = rte_crypto_op_ctod_offset(sr->op, uint8_t *,
- sr->ctx->iv.offset);
- sr->iv.iova = rte_crypto_op_ctophys_offset(sr->op, sr->ctx->iv.offset);
- sr->iv.len = sr->ctx->iv.length;
+ uint16_t offset = sr->ctx->iv.offset + salt_size;
+
+ sr->iv.virt = rte_crypto_op_ctod_offset(sr->op, uint8_t *, offset);
+ sr->iv.iova = rte_crypto_op_ctophys_offset(sr->op, offset);
+ sr->iv.len = sr->ctx->iv.length - salt_size;
}
static int
@@ -254,7 +255,7 @@ extract_cipher_auth_digest(struct nitrox_softreq *sr,
struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
op->sym->m_src;
- if (sr->ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY &&
+ if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
unlikely(!op->sym->auth.digest.data))
return -EINVAL;
@@ -352,6 +353,13 @@ create_cipher_auth_sglist(struct nitrox_softreq *sr,
if (unlikely(auth_only_len < 0))
return -EINVAL;
+ if (unlikely(
+ op->sym->cipher.data.offset + op->sym->cipher.data.length !=
+ op->sym->auth.data.offset + op->sym->auth.data.length)) {
+ NITROX_LOG(ERR, "Auth only data after cipher data not supported\n");
+ return -ENOTSUP;
+ }
+
err = create_sglist_from_mbuf(sgtbl, mbuf, op->sym->auth.data.offset,
auth_only_len);
if (unlikely(err))
@@ -365,6 +373,41 @@ create_cipher_auth_sglist(struct nitrox_softreq *sr,
return 0;
}
+static int
+create_combined_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
+ struct rte_mbuf *mbuf)
+{
+ struct rte_crypto_op *op = sr->op;
+
+ fill_sglist(sgtbl, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ fill_sglist(sgtbl, sr->ctx->aad_length, op->sym->aead.aad.phys_addr,
+ op->sym->aead.aad.data);
+ return create_sglist_from_mbuf(sgtbl, mbuf, op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+}
+
+static int
+create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
+ struct rte_mbuf *mbuf)
+{
+ int err;
+
+ switch (sr->ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_AUTH:
+ case NITROX_CHAIN_AUTH_CIPHER:
+ err = create_cipher_auth_sglist(sr, sgtbl, mbuf);
+ break;
+ case NITROX_CHAIN_COMBINED:
+ err = create_combined_sglist(sr, sgtbl, mbuf);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
static void
create_sgcomp(struct nitrox_sgtable *sgtbl)
{
@@ -383,17 +426,16 @@ create_sgcomp(struct nitrox_sgtable *sgtbl)
}
static int
-create_cipher_auth_inbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
int err;
struct nitrox_crypto_ctx *ctx = sr->ctx;
- err = create_cipher_auth_sglist(sr, &sr->in, sr->op->sym->m_src);
+ err = create_aead_sglist(sr, &sr->in, sr->op->sym->m_src);
if (unlikely(err))
return err;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY)
+ if (ctx->req_op == NITROX_OP_DECRYPT)
fill_sglist(&sr->in, digest->len, digest->iova, digest->virt);
create_sgcomp(&sr->in);
@@ -402,25 +444,24 @@ create_cipher_auth_inbuf(struct nitrox_softreq *sr,
}
static int
-create_cipher_auth_oop_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_oop_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
int err;
struct nitrox_crypto_ctx *ctx = sr->ctx;
- err = create_cipher_auth_sglist(sr, &sr->out, sr->op->sym->m_dst);
+ err = create_aead_sglist(sr, &sr->out, sr->op->sym->m_dst);
if (unlikely(err))
return err;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_GENERATE)
+ if (ctx->req_op == NITROX_OP_ENCRYPT)
fill_sglist(&sr->out, digest->len, digest->iova, digest->virt);
return 0;
}
static void
-create_cipher_auth_inplace_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_inplace_outbuf(struct nitrox_softreq *sr,
+ struct nitrox_sglist *digest)
{
int i, cnt;
struct nitrox_crypto_ctx *ctx = sr->ctx;
@@ -433,17 +474,16 @@ create_cipher_auth_inplace_outbuf(struct nitrox_softreq *sr,
}
sr->out.map_bufs_cnt = cnt;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ if (ctx->req_op == NITROX_OP_ENCRYPT) {
fill_sglist(&sr->out, digest->len, digest->iova,
digest->virt);
- } else if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ } else if (ctx->req_op == NITROX_OP_DECRYPT) {
sr->out.map_bufs_cnt--;
}
}
static int
-create_cipher_auth_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
struct rte_crypto_op *op = sr->op;
int cnt = 0;
@@ -458,11 +498,11 @@ create_cipher_auth_outbuf(struct nitrox_softreq *sr,
if (op->sym->m_dst) {
int err;
- err = create_cipher_auth_oop_outbuf(sr, digest);
+ err = create_aead_oop_outbuf(sr, digest);
if (unlikely(err))
return err;
} else {
- create_cipher_auth_inplace_outbuf(sr, digest);
+ create_aead_inplace_outbuf(sr, digest);
}
cnt = sr->out.map_bufs_cnt;
@@ -516,16 +556,16 @@ process_cipher_auth_data(struct nitrox_softreq *sr)
int err;
struct nitrox_sglist digest;
- softreq_copy_iv(sr);
+ softreq_copy_iv(sr, 0);
err = extract_cipher_auth_digest(sr, &digest);
if (unlikely(err))
return err;
- err = create_cipher_auth_inbuf(sr, &digest);
+ err = create_aead_inbuf(sr, &digest);
if (unlikely(err))
return err;
- err = create_cipher_auth_outbuf(sr, &digest);
+ err = create_aead_outbuf(sr, &digest);
if (unlikely(err))
return err;
@@ -534,6 +574,86 @@ process_cipher_auth_data(struct nitrox_softreq *sr)
return 0;
}
+static int
+softreq_copy_salt(struct nitrox_softreq *sr)
+{
+ struct nitrox_crypto_ctx *ctx = sr->ctx;
+ uint8_t *addr;
+
+ if (unlikely(ctx->iv.length < AES_GCM_SALT_SIZE)) {
+ NITROX_LOG(ERR, "Invalid IV length %d\n", ctx->iv.length);
+ return -EINVAL;
+ }
+
+ addr = rte_crypto_op_ctod_offset(sr->op, uint8_t *, ctx->iv.offset);
+ if (!memcmp(ctx->salt, addr, AES_GCM_SALT_SIZE))
+ return 0;
+
+ memcpy(ctx->salt, addr, AES_GCM_SALT_SIZE);
+ memcpy(ctx->fctx.crypto.iv, addr, AES_GCM_SALT_SIZE);
+ return 0;
+}
+
+static int
+extract_combined_digest(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
+{
+ struct rte_crypto_op *op = sr->op;
+ struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ digest->len = sr->ctx->digest_length;
+ if (op->sym->aead.digest.data) {
+ digest->iova = op->sym->aead.digest.phys_addr;
+ digest->virt = op->sym->aead.digest.data;
+
+ return 0;
+ }
+
+ if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->aead.data.offset +
+ op->sym->aead.data.length + digest->len))
+ return -EINVAL;
+
+ digest->iova = rte_pktmbuf_mtophys_offset(mdst,
+ op->sym->aead.data.offset +
+ op->sym->aead.data.length);
+ digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+ op->sym->aead.data.offset +
+ op->sym->aead.data.length);
+
+ return 0;
+}
+
+static int
+process_combined_data(struct nitrox_softreq *sr)
+{
+ int err;
+ struct nitrox_sglist digest;
+ struct rte_crypto_op *op = sr->op;
+
+ err = softreq_copy_salt(sr);
+ if (unlikely(err))
+ return err;
+
+ softreq_copy_iv(sr, AES_GCM_SALT_SIZE);
+ err = extract_combined_digest(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ err = create_aead_inbuf(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ err = create_aead_outbuf(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ create_aead_gph(op->sym->aead.data.length, sr->iv.len,
+ op->sym->aead.data.length + sr->ctx->aad_length,
+ &sr->gph);
+
+ return 0;
+}
+
static int
process_softreq(struct nitrox_softreq *sr)
{
@@ -545,6 +665,9 @@ process_softreq(struct nitrox_softreq *sr)
case NITROX_CHAIN_AUTH_CIPHER:
err = process_cipher_auth_data(sr);
break;
+ case NITROX_CHAIN_COMBINED:
+ err = process_combined_data(sr);
+ break;
default:
err = -EINVAL;
break;
@@ -558,10 +681,15 @@ nitrox_process_se_req(uint16_t qno, struct rte_crypto_op *op,
struct nitrox_crypto_ctx *ctx,
struct nitrox_softreq *sr)
{
+ int err;
+
softreq_init(sr, sr->iova);
sr->ctx = ctx;
sr->op = op;
- process_softreq(sr);
+ err = process_softreq(sr);
+ if (unlikely(err))
+ return err;
+
create_se_instr(sr, qno);
sr->timeout = rte_get_timer_cycles() + CMD_TIMEOUT * rte_get_timer_hz();
return 0;
@@ -577,7 +705,7 @@ nitrox_check_se_req(struct nitrox_softreq *sr, struct rte_crypto_op **op)
cc = *(volatile uint64_t *)(&sr->resp.completion);
orh = *(volatile uint64_t *)(&sr->resp.orh);
if (cc != PENDING_SIG)
- err = 0;
+ err = orh & 0xff;
else if ((orh != PENDING_SIG) && (orh & 0xff))
err = orh & 0xff;
else if (rte_get_timer_cycles() >= sr->timeout)
--
2.20.1
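One behavioural fix in this patch is easy to miss: once the completion word
is written, nitrox_check_se_req() now returns the low byte of the ORH as the
request status instead of unconditionally reporting success. A simplified
sketch of the revised logic (softreq_status is a name made up for this
sketch; timeout handling is elided):

    static int
    softreq_status(struct nitrox_softreq *sr)
    {
        uint64_t cc = *(volatile uint64_t *)&sr->resp.completion;
        uint64_t orh = *(volatile uint64_t *)&sr->resp.orh;

        if (cc != PENDING_SIG)
            return orh & 0xff;  /* done: ORH[7:0] carries the status */
        if (orh != PENDING_SIG && (orh & 0xff))
            return orh & 0xff;  /* error reported before completion */
        return -EAGAIN;         /* still pending */
    }

This appears to be what lets hardware-reported failures, such as a GCM tag
mismatch on decrypt, propagate to the application rather than being masked.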
* [dpdk-dev] [PATCH v2 3/3] crypto/nitrox: support cipher only crypto operations
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
@ 2020-09-24 13:04 ` Nagadheeraj Rottela
2 siblings, 0 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-09-24 13:04 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch adds support for cipher only crypto operations.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
doc/guides/cryptodevs/nitrox.rst | 2 -
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/nitrox/nitrox_sym.c | 3 +
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 191 ++++++++++++++++------
4 files changed, 149 insertions(+), 52 deletions(-)
diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 91fca905a..095e545c6 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -33,8 +33,6 @@ Supported AEAD algorithms:
Limitations
-----------
-* AES_CBC Cipher Only combination is not supported.
-* 3DES Cipher Only combination is not supported.
* Session-less APIs are not supported.
Installation
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0..ddcf90356 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Marvell NITROX symmetric crypto PMD.**
+
+ * Added cipher only offload support.
+ * Added AES-GCM support.
+
Removed Items
-------------
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fe3ee6e23..2768bdd2e 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -550,6 +550,9 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
ctx = mp_obj;
ctx->nitrox_chain = get_crypto_chain_order(xform);
switch (ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_ONLY:
+ cipher_xform = &xform->cipher;
+ break;
case NITROX_CHAIN_CIPHER_AUTH:
cipher_xform = &xform->cipher;
auth_xform = &xform->next->auth;
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 47f5244b1..fe3ca25a0 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -247,38 +247,6 @@ softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
sr->iv.len = sr->ctx->iv.length - salt_size;
}
-static int
-extract_cipher_auth_digest(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
-{
- struct rte_crypto_op *op = sr->op;
- struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
- op->sym->m_src;
-
- if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
- unlikely(!op->sym->auth.digest.data))
- return -EINVAL;
-
- digest->len = sr->ctx->digest_length;
- if (op->sym->auth.digest.data) {
- digest->iova = op->sym->auth.digest.phys_addr;
- digest->virt = op->sym->auth.digest.data;
- return 0;
- }
-
- if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
- op->sym->auth.data.length + digest->len))
- return -EINVAL;
-
- digest->iova = rte_pktmbuf_iova_offset(mdst,
- op->sym->auth.data.offset +
- op->sym->auth.data.length);
- digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
- op->sym->auth.data.offset +
- op->sym->auth.data.length);
- return 0;
-}
-
static void
fill_sglist(struct nitrox_sgtable *sgtbl, uint16_t len, rte_iova_t iova,
void *virt)
@@ -340,6 +308,143 @@ create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf,
return 0;
}
+static void
+create_sgcomp(struct nitrox_sgtable *sgtbl)
+{
+ int i, j, nr_sgcomp;
+ struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
+ struct nitrox_sglist *sglist = sgtbl->sglist;
+
+ nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
+ sgtbl->nr_sgcomp = nr_sgcomp;
+ for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
+ for (j = 0; j < 4; j++, sglist++) {
+ sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
+ sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
+ }
+ }
+}
+
+static int
+create_cipher_inbuf(struct nitrox_softreq *sr)
+{
+ int err;
+ struct rte_crypto_op *op = sr->op;
+
+ fill_sglist(&sr->in, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ err = create_sglist_from_mbuf(&sr->in, op->sym->m_src,
+ op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+ if (unlikely(err))
+ return err;
+
+ create_sgcomp(&sr->in);
+ sr->dptr = sr->iova + offsetof(struct nitrox_softreq, in.sgcomp);
+
+ return 0;
+}
+
+static int
+create_cipher_outbuf(struct nitrox_softreq *sr)
+{
+ struct rte_crypto_op *op = sr->op;
+ int err, cnt = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ sr->resp.orh = PENDING_SIG;
+ sr->out.sglist[cnt].len = sizeof(sr->resp.orh);
+ sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+ resp.orh);
+ sr->out.sglist[cnt].virt = &sr->resp.orh;
+ cnt++;
+
+ sr->out.map_bufs_cnt = cnt;
+ fill_sglist(&sr->out, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ err = create_sglist_from_mbuf(&sr->out, m_dst,
+ op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+ if (unlikely(err))
+ return err;
+
+ cnt = sr->out.map_bufs_cnt;
+ sr->resp.completion = PENDING_SIG;
+ sr->out.sglist[cnt].len = sizeof(sr->resp.completion);
+ sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+ resp.completion);
+ sr->out.sglist[cnt].virt = &sr->resp.completion;
+ cnt++;
+
+ RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+ sr->out.map_bufs_cnt = cnt;
+
+ create_sgcomp(&sr->out);
+ sr->rptr = sr->iova + offsetof(struct nitrox_softreq, out.sgcomp);
+
+ return 0;
+}
+
+static void
+create_cipher_gph(uint32_t cryptlen, uint16_t ivlen, struct gphdr *gph)
+{
+ gph->param0 = rte_cpu_to_be_16(cryptlen);
+ gph->param1 = 0;
+ gph->param2 = rte_cpu_to_be_16(ivlen);
+ gph->param3 = 0;
+}
+
+static int
+process_cipher_data(struct nitrox_softreq *sr)
+{
+ struct rte_crypto_op *op = sr->op;
+ int err;
+
+ softreq_copy_iv(sr, 0);
+ err = create_cipher_inbuf(sr);
+ if (unlikely(err))
+ return err;
+
+ err = create_cipher_outbuf(sr);
+ if (unlikely(err))
+ return err;
+
+ create_cipher_gph(op->sym->cipher.data.length, sr->iv.len, &sr->gph);
+
+ return 0;
+}
+
+static int
+extract_cipher_auth_digest(struct nitrox_softreq *sr,
+ struct nitrox_sglist *digest)
+{
+ struct rte_crypto_op *op = sr->op;
+ struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
+ unlikely(!op->sym->auth.digest.data))
+ return -EINVAL;
+
+ digest->len = sr->ctx->digest_length;
+ if (op->sym->auth.digest.data) {
+ digest->iova = op->sym->auth.digest.phys_addr;
+ digest->virt = op->sym->auth.digest.data;
+ return 0;
+ }
+
+ if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
+ op->sym->auth.data.length + digest->len))
+ return -EINVAL;
+
+ digest->iova = rte_pktmbuf_iova_offset(mdst,
+ op->sym->auth.data.offset +
+ op->sym->auth.data.length);
+ digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+ op->sym->auth.data.offset +
+ op->sym->auth.data.length);
+ return 0;
+}
+
static int
create_cipher_auth_sglist(struct nitrox_softreq *sr,
struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf)
@@ -408,23 +513,6 @@ create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
return err;
}
-static void
-create_sgcomp(struct nitrox_sgtable *sgtbl)
-{
- int i, j, nr_sgcomp;
- struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
- struct nitrox_sglist *sglist = sgtbl->sglist;
-
- nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
- sgtbl->nr_sgcomp = nr_sgcomp;
- for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
- for (j = 0; j < 4; j++, sglist++) {
- sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
- sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
- }
- }
-}
-
static int
create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
@@ -613,7 +701,7 @@ extract_combined_digest(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
op->sym->aead.data.length + digest->len))
return -EINVAL;
- digest->iova = rte_pktmbuf_mtophys_offset(mdst,
+ digest->iova = rte_pktmbuf_iova_offset(mdst,
op->sym->aead.data.offset +
op->sym->aead.data.length);
digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
@@ -661,6 +749,9 @@ process_softreq(struct nitrox_softreq *sr)
int err = 0;
switch (ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_ONLY:
+ err = process_cipher_data(sr);
+ break;
case NITROX_CHAIN_CIPHER_AUTH:
case NITROX_CHAIN_AUTH_CIPHER:
err = process_cipher_auth_data(sr);
--
2.20.1
^ permalink raw reply [flat|nested] 17+ messages in thread
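A note for readers following the request manager changes above, reconstructed from the diff: create_cipher_outbuf() brackets the cipher output with the hardware's two status words, both pre-seeded with PENDING_SIG so completion can be detected by polling. The output scatter list of a cipher-only request is laid out as:

  [0]     ORH, the 64-bit overall response header (seeded with PENDING_SIG)
  [1]     the IV region
  [2..n]  cipher output, mapped from m_dst (or m_src when operating in place)
          at cipher.data.offset for cipher.data.length bytes
  [n+1]   completion word (seeded with PENDING_SIG)

create_sgcomp() then packs these entries four at a time into big-endian sgcomp descriptors, which is why nr_sgcomp is map_bufs_cnt rounded up to a multiple of four and divided by four; nitrox_check_se_req() later polls the completion word and reports the low byte of the ORH as the error code.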
* Re: [dpdk-dev] [PATCH v2 2/3] crypto/nitrox: support AES-GCM
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
@ 2020-10-06 20:04 ` Akhil Goyal
0 siblings, 0 replies; 17+ messages in thread
From: Akhil Goyal @ 2020-10-06 20:04 UTC (permalink / raw)
To: Nagadheeraj Rottela; +Cc: dev, jsrikanth
Hi Nagadheeraj,
> Subject: [PATCH v2 2/3] crypto/nitrox: support AES-GCM
>
> This patch adds support for the AES-GCM AEAD algorithm.
>
> Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
> ---
This patch shows a compilation error when compiled individually.
./drivers/crypto/nitrox/nitrox_sym_reqmgr.c
../drivers/crypto/nitrox/nitrox_sym_reqmgr.c: In function 'extract_combined_digest':
../drivers/crypto/nitrox/nitrox_sym_reqmgr.c:616:17: error: implicit declaration of function 'rte_pktmbuf_mtophys_offset'; did you mean 'rte_pktmbuf_mtod_offset'? [-Werror=implicit-function-declaration]
digest->iova = rte_pktmbuf_mtophys_offset(mdst,
^~~~~~~~~~~~~~~~~~~~~~~~~~
rte_pktmbuf_mtod_offset
../drivers/crypto/nitrox/nitrox_sym_reqmgr.c:616:17: error: nested extern declaration of 'rte_pktmbuf_mtophys_offset' [-Werror=nested-externs]
cc1: all warnings being treated as errors
^ permalink raw reply [flat|nested] 17+ messages in thread
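The failing call is the old physical-address helper, which has been removed from the mbuf API; rte_pktmbuf_iova_offset() is the one-for-one replacement (same arguments, resolving to buf_iova + data_off + offset), and that is what v3 of the patch switches to in extract_combined_digest(), as visible in the hunk further below. A minimal before/after sketch of the fix:

	/* before: removed helper, fails with -Werror=implicit-function-declaration */
	digest->iova = rte_pktmbuf_mtophys_offset(mdst,
						  op->sym->aead.data.offset +
						  op->sym->aead.data.length);

	/* after: current mbuf API, identical semantics */
	digest->iova = rte_pktmbuf_iova_offset(mdst,
					       op->sym->aead.data.offset +
					       op->sym->aead.data.length);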
* Re: [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
@ 2020-10-06 20:07 ` Akhil Goyal
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
0 siblings, 1 reply; 17+ messages in thread
From: Akhil Goyal @ 2020-10-06 20:07 UTC (permalink / raw)
To: Nagadheeraj Rottela; +Cc: dev, jsrikanth
> Subject: [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite
>
> Replace NITROX PMD specific tests with generic test suite.
>
> Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
> ---
Applied this patch to dpdk-next-crypto as it is unrelated to the rest of the series.
Please fix the compilation error in the other two patches of the series.
Thanks!
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support
2020-10-06 20:07 ` Akhil Goyal
@ 2020-10-09 5:57 ` Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 1/2] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-10-09 5:57 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patchset adds support for AES-GCM and cipher only offload.
---
v3:
* Fixed compilation error while compiling individual patches.
v2:
* Rebased patches to latest master and resolved merge conflict.
* Updated release notes.
Nagadheeraj Rottela (2):
crypto/nitrox: support AES-GCM
crypto/nitrox: support cipher only crypto operations
doc/guides/cryptodevs/features/nitrox.ini | 3 +
doc/guides/cryptodevs/nitrox.rst | 6 +-
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/nitrox/nitrox_sym.c | 85 ++++-
.../crypto/nitrox/nitrox_sym_capabilities.c | 30 ++
drivers/crypto/nitrox/nitrox_sym_ctx.h | 5 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 357 ++++++++++++++----
7 files changed, 409 insertions(+), 82 deletions(-)
--
2.20.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH v3 1/2] crypto/nitrox: support AES-GCM
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
@ 2020-10-09 5:57 ` Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 2/2] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2020-10-09 12:13 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Akhil Goyal
2 siblings, 0 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-10-09 5:57 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch adds support for the AES-GCM AEAD algorithm.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
doc/guides/cryptodevs/features/nitrox.ini | 3 +
doc/guides/cryptodevs/nitrox.rst | 4 +
drivers/crypto/nitrox/nitrox_sym.c | 82 +++++++-
.../crypto/nitrox/nitrox_sym_capabilities.c | 30 +++
drivers/crypto/nitrox/nitrox_sym_ctx.h | 5 +-
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 182 +++++++++++++++---
6 files changed, 268 insertions(+), 38 deletions(-)
diff --git a/doc/guides/cryptodevs/features/nitrox.ini b/doc/guides/cryptodevs/features/nitrox.ini
index 183494731..a1d6bcb4f 100644
--- a/doc/guides/cryptodevs/features/nitrox.ini
+++ b/doc/guides/cryptodevs/features/nitrox.ini
@@ -34,6 +34,9 @@ SHA256 HMAC = Y
; Supported AEAD algorithms of the 'nitrox' crypto driver.
;
[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
;
; Supported Asymmetric algorithms of the 'nitrox' crypto driver.
diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 85f5212b6..91fca905a 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -26,6 +26,10 @@ Hash algorithms:
* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+
Limitations
-----------
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fad4a7a48..fe3ee6e23 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -20,6 +20,7 @@
#define NPS_PKT_IN_INSTR_SIZE 64
#define IV_FROM_DPTR 1
#define FLEXI_CRYPTO_ENCRYPT_HMAC 0x33
+#define FLEXI_CRYPTO_MAX_AAD_LEN 512
#define AES_KEYSIZE_128 16
#define AES_KEYSIZE_192 24
#define AES_KEYSIZE_256 32
@@ -297,6 +298,9 @@ get_crypto_chain_order(const struct rte_crypto_sym_xform *xform)
}
}
break;
+ case RTE_CRYPTO_SYM_XFORM_AEAD:
+ res = NITROX_CHAIN_COMBINED;
+ break;
default:
break;
}
@@ -431,17 +435,17 @@ get_flexi_auth_type(enum rte_crypto_auth_algorithm algo)
}
static bool
-auth_key_digest_is_valid(struct rte_crypto_auth_xform *xform,
- struct flexi_crypto_context *fctx)
+auth_key_is_valid(const uint8_t *data, uint16_t length,
+ struct flexi_crypto_context *fctx)
{
- if (unlikely(!xform->key.data && xform->key.length)) {
+ if (unlikely(!data && length)) {
NITROX_LOG(ERR, "Invalid auth key\n");
return false;
}
- if (unlikely(xform->key.length > sizeof(fctx->auth.opad))) {
+ if (unlikely(length > sizeof(fctx->auth.opad))) {
NITROX_LOG(ERR, "Invalid auth key length %d\n",
- xform->key.length);
+ length);
return false;
}
@@ -459,11 +463,10 @@ configure_auth_ctx(struct rte_crypto_auth_xform *xform,
if (unlikely(type == AUTH_INVALID))
return -ENOTSUP;
- if (unlikely(!auth_key_digest_is_valid(xform, fctx)))
+ if (unlikely(!auth_key_is_valid(xform->key.data, xform->key.length,
+ fctx)))
return -EINVAL;
- ctx->auth_op = xform->op;
- ctx->auth_algo = xform->algo;
ctx->digest_length = xform->digest_length;
fctx->flags = rte_be_to_cpu_64(fctx->flags);
@@ -476,6 +479,56 @@ configure_auth_ctx(struct rte_crypto_auth_xform *xform,
return 0;
}
+static int
+configure_aead_ctx(struct rte_crypto_aead_xform *xform,
+ struct nitrox_crypto_ctx *ctx)
+{
+ int aes_keylen;
+ struct flexi_crypto_context *fctx = &ctx->fctx;
+
+ if (unlikely(xform->aad_length > FLEXI_CRYPTO_MAX_AAD_LEN)) {
+ NITROX_LOG(ERR, "AAD length %d not supported\n",
+ xform->aad_length);
+ return -ENOTSUP;
+ }
+
+ if (unlikely(xform->algo != RTE_CRYPTO_AEAD_AES_GCM))
+ return -ENOTSUP;
+
+ aes_keylen = flexi_aes_keylen(xform->key.length, true);
+ if (unlikely(aes_keylen < 0))
+ return -EINVAL;
+
+ if (unlikely(!auth_key_is_valid(xform->key.data, xform->key.length,
+ fctx)))
+ return -EINVAL;
+
+ if (unlikely(xform->iv.length > MAX_IV_LEN))
+ return -EINVAL;
+
+ fctx->flags = rte_be_to_cpu_64(fctx->flags);
+ fctx->w0.cipher_type = CIPHER_AES_GCM;
+ fctx->w0.aes_keylen = aes_keylen;
+ fctx->w0.iv_source = IV_FROM_DPTR;
+ fctx->w0.hash_type = AUTH_NULL;
+ fctx->w0.auth_input_type = 1;
+ fctx->w0.mac_len = xform->digest_length;
+ fctx->flags = rte_cpu_to_be_64(fctx->flags);
+ memset(fctx->crypto.key, 0, sizeof(fctx->crypto.key));
+ memcpy(fctx->crypto.key, xform->key.data, xform->key.length);
+ memset(&fctx->auth, 0, sizeof(fctx->auth));
+ memcpy(fctx->auth.opad, xform->key.data, xform->key.length);
+
+ ctx->opcode = FLEXI_CRYPTO_ENCRYPT_HMAC;
+ ctx->req_op = (xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ NITROX_OP_ENCRYPT : NITROX_OP_DECRYPT;
+ ctx->iv.offset = xform->iv.offset;
+ ctx->iv.length = xform->iv.length;
+ ctx->digest_length = xform->digest_length;
+ ctx->aad_length = xform->aad_length;
+ return 0;
+}
+
static int
nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
struct rte_crypto_sym_xform *xform,
@@ -486,6 +539,8 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
struct nitrox_crypto_ctx *ctx;
struct rte_crypto_cipher_xform *cipher_xform = NULL;
struct rte_crypto_auth_xform *auth_xform = NULL;
+ struct rte_crypto_aead_xform *aead_xform = NULL;
+ int ret = -EINVAL;
if (rte_mempool_get(mempool, &mp_obj)) {
NITROX_LOG(ERR, "Couldn't allocate context\n");
@@ -503,8 +558,12 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
auth_xform = &xform->auth;
cipher_xform = &xform->next->cipher;
break;
+ case NITROX_CHAIN_COMBINED:
+ aead_xform = &xform->aead;
+ break;
default:
NITROX_LOG(ERR, "Crypto chain not supported\n");
+ ret = -ENOTSUP;
goto err;
}
@@ -518,12 +577,17 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
goto err;
}
+ if (aead_xform && unlikely(configure_aead_ctx(aead_xform, ctx))) {
+ NITROX_LOG(ERR, "Failed to configure aead ctx\n");
+ goto err;
+ }
+
ctx->iova = rte_mempool_virt2iova(ctx);
set_sym_session_private_data(sess, cdev->driver_id, ctx);
return 0;
err:
rte_mempool_put(mempool, mp_obj);
- return -EINVAL;
+ return ret;
}
static void
diff --git a/drivers/crypto/nitrox/nitrox_sym_capabilities.c b/drivers/crypto/nitrox/nitrox_sym_capabilities.c
index dc4df9185..a30cd9f8f 100644
--- a/drivers/crypto/nitrox/nitrox_sym_capabilities.c
+++ b/drivers/crypto/nitrox/nitrox_sym_capabilities.c
@@ -108,6 +108,36 @@ static const struct rte_cryptodev_capabilities nitrox_capabilities[] = {
}, }
}, }
},
+ { /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 512,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
diff --git a/drivers/crypto/nitrox/nitrox_sym_ctx.h b/drivers/crypto/nitrox/nitrox_sym_ctx.h
index 2985e519f..deb00fc1e 100644
--- a/drivers/crypto/nitrox/nitrox_sym_ctx.h
+++ b/drivers/crypto/nitrox/nitrox_sym_ctx.h
@@ -11,6 +11,7 @@
#define AES_MAX_KEY_SIZE 32
#define AES_BLOCK_SIZE 16
+#define AES_GCM_SALT_SIZE 4
enum nitrox_chain {
NITROX_CHAIN_CIPHER_ONLY,
@@ -69,14 +70,14 @@ struct flexi_crypto_context {
struct nitrox_crypto_ctx {
struct flexi_crypto_context fctx;
enum nitrox_chain nitrox_chain;
- enum rte_crypto_auth_operation auth_op;
- enum rte_crypto_auth_algorithm auth_algo;
struct {
uint16_t offset;
uint16_t length;
} iv;
rte_iova_t iova;
+ uint8_t salt[AES_GCM_SALT_SIZE];
uint16_t digest_length;
+ uint16_t aad_length;
uint8_t opcode;
uint8_t req_op;
};
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 449224780..113ce5d11 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -238,12 +238,13 @@ create_se_instr(struct nitrox_softreq *sr, uint8_t qno)
}
static void
-softreq_copy_iv(struct nitrox_softreq *sr)
+softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
{
- sr->iv.virt = rte_crypto_op_ctod_offset(sr->op, uint8_t *,
- sr->ctx->iv.offset);
- sr->iv.iova = rte_crypto_op_ctophys_offset(sr->op, sr->ctx->iv.offset);
- sr->iv.len = sr->ctx->iv.length;
+ uint16_t offset = sr->ctx->iv.offset + salt_size;
+
+ sr->iv.virt = rte_crypto_op_ctod_offset(sr->op, uint8_t *, offset);
+ sr->iv.iova = rte_crypto_op_ctophys_offset(sr->op, offset);
+ sr->iv.len = sr->ctx->iv.length - salt_size;
}
static int
@@ -254,7 +255,7 @@ extract_cipher_auth_digest(struct nitrox_softreq *sr,
struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
op->sym->m_src;
- if (sr->ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY &&
+ if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
unlikely(!op->sym->auth.digest.data))
return -EINVAL;
@@ -352,6 +353,13 @@ create_cipher_auth_sglist(struct nitrox_softreq *sr,
if (unlikely(auth_only_len < 0))
return -EINVAL;
+ if (unlikely(
+ op->sym->cipher.data.offset + op->sym->cipher.data.length !=
+ op->sym->auth.data.offset + op->sym->auth.data.length)) {
+ NITROX_LOG(ERR, "Auth only data after cipher data not supported\n");
+ return -ENOTSUP;
+ }
+
err = create_sglist_from_mbuf(sgtbl, mbuf, op->sym->auth.data.offset,
auth_only_len);
if (unlikely(err))
@@ -365,6 +373,41 @@ create_cipher_auth_sglist(struct nitrox_softreq *sr,
return 0;
}
+static int
+create_combined_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
+ struct rte_mbuf *mbuf)
+{
+ struct rte_crypto_op *op = sr->op;
+
+ fill_sglist(sgtbl, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ fill_sglist(sgtbl, sr->ctx->aad_length, op->sym->aead.aad.phys_addr,
+ op->sym->aead.aad.data);
+ return create_sglist_from_mbuf(sgtbl, mbuf, op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+}
+
+static int
+create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
+ struct rte_mbuf *mbuf)
+{
+ int err;
+
+ switch (sr->ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_AUTH:
+ case NITROX_CHAIN_AUTH_CIPHER:
+ err = create_cipher_auth_sglist(sr, sgtbl, mbuf);
+ break;
+ case NITROX_CHAIN_COMBINED:
+ err = create_combined_sglist(sr, sgtbl, mbuf);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
static void
create_sgcomp(struct nitrox_sgtable *sgtbl)
{
@@ -383,17 +426,16 @@ create_sgcomp(struct nitrox_sgtable *sgtbl)
}
static int
-create_cipher_auth_inbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
int err;
struct nitrox_crypto_ctx *ctx = sr->ctx;
- err = create_cipher_auth_sglist(sr, &sr->in, sr->op->sym->m_src);
+ err = create_aead_sglist(sr, &sr->in, sr->op->sym->m_src);
if (unlikely(err))
return err;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY)
+ if (ctx->req_op == NITROX_OP_DECRYPT)
fill_sglist(&sr->in, digest->len, digest->iova, digest->virt);
create_sgcomp(&sr->in);
@@ -402,25 +444,24 @@ create_cipher_auth_inbuf(struct nitrox_softreq *sr,
}
static int
-create_cipher_auth_oop_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_oop_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
int err;
struct nitrox_crypto_ctx *ctx = sr->ctx;
- err = create_cipher_auth_sglist(sr, &sr->out, sr->op->sym->m_dst);
+ err = create_aead_sglist(sr, &sr->out, sr->op->sym->m_dst);
if (unlikely(err))
return err;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_GENERATE)
+ if (ctx->req_op == NITROX_OP_ENCRYPT)
fill_sglist(&sr->out, digest->len, digest->iova, digest->virt);
return 0;
}
static void
-create_cipher_auth_inplace_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_inplace_outbuf(struct nitrox_softreq *sr,
+ struct nitrox_sglist *digest)
{
int i, cnt;
struct nitrox_crypto_ctx *ctx = sr->ctx;
@@ -433,17 +474,16 @@ create_cipher_auth_inplace_outbuf(struct nitrox_softreq *sr,
}
sr->out.map_bufs_cnt = cnt;
- if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ if (ctx->req_op == NITROX_OP_ENCRYPT) {
fill_sglist(&sr->out, digest->len, digest->iova,
digest->virt);
- } else if (ctx->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ } else if (ctx->req_op == NITROX_OP_DECRYPT) {
sr->out.map_bufs_cnt--;
}
}
static int
-create_cipher_auth_outbuf(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
+create_aead_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
struct rte_crypto_op *op = sr->op;
int cnt = 0;
@@ -458,11 +498,11 @@ create_cipher_auth_outbuf(struct nitrox_softreq *sr,
if (op->sym->m_dst) {
int err;
- err = create_cipher_auth_oop_outbuf(sr, digest);
+ err = create_aead_oop_outbuf(sr, digest);
if (unlikely(err))
return err;
} else {
- create_cipher_auth_inplace_outbuf(sr, digest);
+ create_aead_inplace_outbuf(sr, digest);
}
cnt = sr->out.map_bufs_cnt;
@@ -516,16 +556,16 @@ process_cipher_auth_data(struct nitrox_softreq *sr)
int err;
struct nitrox_sglist digest;
- softreq_copy_iv(sr);
+ softreq_copy_iv(sr, 0);
err = extract_cipher_auth_digest(sr, &digest);
if (unlikely(err))
return err;
- err = create_cipher_auth_inbuf(sr, &digest);
+ err = create_aead_inbuf(sr, &digest);
if (unlikely(err))
return err;
- err = create_cipher_auth_outbuf(sr, &digest);
+ err = create_aead_outbuf(sr, &digest);
if (unlikely(err))
return err;
@@ -534,6 +574,86 @@ process_cipher_auth_data(struct nitrox_softreq *sr)
return 0;
}
+static int
+softreq_copy_salt(struct nitrox_softreq *sr)
+{
+ struct nitrox_crypto_ctx *ctx = sr->ctx;
+ uint8_t *addr;
+
+ if (unlikely(ctx->iv.length < AES_GCM_SALT_SIZE)) {
+ NITROX_LOG(ERR, "Invalid IV length %d\n", ctx->iv.length);
+ return -EINVAL;
+ }
+
+ addr = rte_crypto_op_ctod_offset(sr->op, uint8_t *, ctx->iv.offset);
+ if (!memcmp(ctx->salt, addr, AES_GCM_SALT_SIZE))
+ return 0;
+
+ memcpy(ctx->salt, addr, AES_GCM_SALT_SIZE);
+ memcpy(ctx->fctx.crypto.iv, addr, AES_GCM_SALT_SIZE);
+ return 0;
+}
+
+static int
+extract_combined_digest(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
+{
+ struct rte_crypto_op *op = sr->op;
+ struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ digest->len = sr->ctx->digest_length;
+ if (op->sym->aead.digest.data) {
+ digest->iova = op->sym->aead.digest.phys_addr;
+ digest->virt = op->sym->aead.digest.data;
+
+ return 0;
+ }
+
+ if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->aead.data.offset +
+ op->sym->aead.data.length + digest->len))
+ return -EINVAL;
+
+ digest->iova = rte_pktmbuf_iova_offset(mdst,
+ op->sym->aead.data.offset +
+ op->sym->aead.data.length);
+ digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+ op->sym->aead.data.offset +
+ op->sym->aead.data.length);
+
+ return 0;
+}
+
+static int
+process_combined_data(struct nitrox_softreq *sr)
+{
+ int err;
+ struct nitrox_sglist digest;
+ struct rte_crypto_op *op = sr->op;
+
+ err = softreq_copy_salt(sr);
+ if (unlikely(err))
+ return err;
+
+ softreq_copy_iv(sr, AES_GCM_SALT_SIZE);
+ err = extract_combined_digest(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ err = create_aead_inbuf(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ err = create_aead_outbuf(sr, &digest);
+ if (unlikely(err))
+ return err;
+
+ create_aead_gph(op->sym->aead.data.length, sr->iv.len,
+ op->sym->aead.data.length + sr->ctx->aad_length,
+ &sr->gph);
+
+ return 0;
+}
+
static int
process_softreq(struct nitrox_softreq *sr)
{
@@ -545,6 +665,9 @@ process_softreq(struct nitrox_softreq *sr)
case NITROX_CHAIN_AUTH_CIPHER:
err = process_cipher_auth_data(sr);
break;
+ case NITROX_CHAIN_COMBINED:
+ err = process_combined_data(sr);
+ break;
default:
err = -EINVAL;
break;
@@ -558,10 +681,15 @@ nitrox_process_se_req(uint16_t qno, struct rte_crypto_op *op,
struct nitrox_crypto_ctx *ctx,
struct nitrox_softreq *sr)
{
+ int err;
+
softreq_init(sr, sr->iova);
sr->ctx = ctx;
sr->op = op;
- process_softreq(sr);
+ err = process_softreq(sr);
+ if (unlikely(err))
+ return err;
+
create_se_instr(sr, qno);
sr->timeout = rte_get_timer_cycles() + CMD_TIMEOUT * rte_get_timer_hz();
return 0;
@@ -577,7 +705,7 @@ nitrox_check_se_req(struct nitrox_softreq *sr, struct rte_crypto_op **op)
cc = *(volatile uint64_t *)(&sr->resp.completion);
orh = *(volatile uint64_t *)(&sr->resp.orh);
if (cc != PENDING_SIG)
- err = 0;
+ err = orh & 0xff;
else if ((orh != PENDING_SIG) && (orh & 0xff))
err = orh & 0xff;
else if (rte_get_timer_cycles() >= sr->timeout)
--
2.20.1
^ permalink raw reply [flat|nested] 17+ messages in thread
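A note on how an application drives the new AEAD path: the capabilities above advertise a 12 to 16 byte IV, and softreq_copy_salt()/softreq_copy_iv() treat the first 4 bytes of the IV field as the salt (copied once into fctx->crypto.iv) and the remainder as the per-operation IV, so a 16-byte field yields the usual 4-byte salt plus 12-byte IV. A rough sketch of a matching transform; key, IV_OFFSET and the exact lengths are illustrative placeholders, not part of the patch:

	struct rte_crypto_sym_xform xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.aead = {
			.op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
			.algo = RTE_CRYPTO_AEAD_AES_GCM,
			.key = { .data = key, .length = 16 },        /* AES-128 */
			.iv = { .offset = IV_OFFSET, .length = 16 }, /* 4B salt + 12B IV */
			.digest_length = 16,
			.aad_length = 16, /* up to FLEXI_CRYPTO_MAX_AAD_LEN (512) */
		},
	};

The IV bytes themselves live in the crypto op at iv.offset, which is where the driver reads both the salt and the per-operation IV.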
* [dpdk-dev] [PATCH v3 2/2] crypto/nitrox: support cipher only crypto operations
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 1/2] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
@ 2020-10-09 5:57 ` Nagadheeraj Rottela
2020-10-09 12:13 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Akhil Goyal
2 siblings, 0 replies; 17+ messages in thread
From: Nagadheeraj Rottela @ 2020-10-09 5:57 UTC (permalink / raw)
To: akhil.goyal; +Cc: dev, jsrikanth, Nagadheeraj Rottela
This patch adds support for cipher-only crypto operations.
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
doc/guides/cryptodevs/nitrox.rst | 2 -
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/nitrox/nitrox_sym.c | 3 +
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 189 ++++++++++++++++------
4 files changed, 148 insertions(+), 51 deletions(-)
diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 91fca905a..095e545c6 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -33,8 +33,6 @@ Supported AEAD algorithms:
Limitations
-----------
-* AES_CBC Cipher Only combination is not supported.
-* 3DES Cipher Only combination is not supported.
* Session-less APIs are not supported.
Installation
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0..ddcf90356 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Marvell NITROX symmetric crypto PMD.**
+
+ * Added cipher only offload support.
+ * Added AES-GCM support.
+
Removed Items
-------------
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fe3ee6e23..2768bdd2e 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -550,6 +550,9 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
ctx = mp_obj;
ctx->nitrox_chain = get_crypto_chain_order(xform);
switch (ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_ONLY:
+ cipher_xform = &xform->cipher;
+ break;
case NITROX_CHAIN_CIPHER_AUTH:
cipher_xform = &xform->cipher;
auth_xform = &xform->next->auth;
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 113ce5d11..fe3ca25a0 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -247,38 +247,6 @@ softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
sr->iv.len = sr->ctx->iv.length - salt_size;
}
-static int
-extract_cipher_auth_digest(struct nitrox_softreq *sr,
- struct nitrox_sglist *digest)
-{
- struct rte_crypto_op *op = sr->op;
- struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
- op->sym->m_src;
-
- if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
- unlikely(!op->sym->auth.digest.data))
- return -EINVAL;
-
- digest->len = sr->ctx->digest_length;
- if (op->sym->auth.digest.data) {
- digest->iova = op->sym->auth.digest.phys_addr;
- digest->virt = op->sym->auth.digest.data;
- return 0;
- }
-
- if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
- op->sym->auth.data.length + digest->len))
- return -EINVAL;
-
- digest->iova = rte_pktmbuf_iova_offset(mdst,
- op->sym->auth.data.offset +
- op->sym->auth.data.length);
- digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
- op->sym->auth.data.offset +
- op->sym->auth.data.length);
- return 0;
-}
-
static void
fill_sglist(struct nitrox_sgtable *sgtbl, uint16_t len, rte_iova_t iova,
void *virt)
@@ -340,6 +308,143 @@ create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf,
return 0;
}
+static void
+create_sgcomp(struct nitrox_sgtable *sgtbl)
+{
+ int i, j, nr_sgcomp;
+ struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
+ struct nitrox_sglist *sglist = sgtbl->sglist;
+
+ nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
+ sgtbl->nr_sgcomp = nr_sgcomp;
+ for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
+ for (j = 0; j < 4; j++, sglist++) {
+ sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
+ sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
+ }
+ }
+}
+
+static int
+create_cipher_inbuf(struct nitrox_softreq *sr)
+{
+ int err;
+ struct rte_crypto_op *op = sr->op;
+
+ fill_sglist(&sr->in, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ err = create_sglist_from_mbuf(&sr->in, op->sym->m_src,
+ op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+ if (unlikely(err))
+ return err;
+
+ create_sgcomp(&sr->in);
+ sr->dptr = sr->iova + offsetof(struct nitrox_softreq, in.sgcomp);
+
+ return 0;
+}
+
+static int
+create_cipher_outbuf(struct nitrox_softreq *sr)
+{
+ struct rte_crypto_op *op = sr->op;
+ int err, cnt = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ sr->resp.orh = PENDING_SIG;
+ sr->out.sglist[cnt].len = sizeof(sr->resp.orh);
+ sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+ resp.orh);
+ sr->out.sglist[cnt].virt = &sr->resp.orh;
+ cnt++;
+
+ sr->out.map_bufs_cnt = cnt;
+ fill_sglist(&sr->out, sr->iv.len, sr->iv.iova, sr->iv.virt);
+ err = create_sglist_from_mbuf(&sr->out, m_dst,
+ op->sym->cipher.data.offset,
+ op->sym->cipher.data.length);
+ if (unlikely(err))
+ return err;
+
+ cnt = sr->out.map_bufs_cnt;
+ sr->resp.completion = PENDING_SIG;
+ sr->out.sglist[cnt].len = sizeof(sr->resp.completion);
+ sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+ resp.completion);
+ sr->out.sglist[cnt].virt = &sr->resp.completion;
+ cnt++;
+
+ RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+ sr->out.map_bufs_cnt = cnt;
+
+ create_sgcomp(&sr->out);
+ sr->rptr = sr->iova + offsetof(struct nitrox_softreq, out.sgcomp);
+
+ return 0;
+}
+
+static void
+create_cipher_gph(uint32_t cryptlen, uint16_t ivlen, struct gphdr *gph)
+{
+ gph->param0 = rte_cpu_to_be_16(cryptlen);
+ gph->param1 = 0;
+ gph->param2 = rte_cpu_to_be_16(ivlen);
+ gph->param3 = 0;
+}
+
+static int
+process_cipher_data(struct nitrox_softreq *sr)
+{
+ struct rte_crypto_op *op = sr->op;
+ int err;
+
+ softreq_copy_iv(sr, 0);
+ err = create_cipher_inbuf(sr);
+ if (unlikely(err))
+ return err;
+
+ err = create_cipher_outbuf(sr);
+ if (unlikely(err))
+ return err;
+
+ create_cipher_gph(op->sym->cipher.data.length, sr->iv.len, &sr->gph);
+
+ return 0;
+}
+
+static int
+extract_cipher_auth_digest(struct nitrox_softreq *sr,
+ struct nitrox_sglist *digest)
+{
+ struct rte_crypto_op *op = sr->op;
+ struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+ op->sym->m_src;
+
+ if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
+ unlikely(!op->sym->auth.digest.data))
+ return -EINVAL;
+
+ digest->len = sr->ctx->digest_length;
+ if (op->sym->auth.digest.data) {
+ digest->iova = op->sym->auth.digest.phys_addr;
+ digest->virt = op->sym->auth.digest.data;
+ return 0;
+ }
+
+ if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
+ op->sym->auth.data.length + digest->len))
+ return -EINVAL;
+
+ digest->iova = rte_pktmbuf_iova_offset(mdst,
+ op->sym->auth.data.offset +
+ op->sym->auth.data.length);
+ digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+ op->sym->auth.data.offset +
+ op->sym->auth.data.length);
+ return 0;
+}
+
static int
create_cipher_auth_sglist(struct nitrox_softreq *sr,
struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf)
@@ -408,23 +513,6 @@ create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
return err;
}
-static void
-create_sgcomp(struct nitrox_sgtable *sgtbl)
-{
- int i, j, nr_sgcomp;
- struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
- struct nitrox_sglist *sglist = sgtbl->sglist;
-
- nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
- sgtbl->nr_sgcomp = nr_sgcomp;
- for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
- for (j = 0; j < 4; j++, sglist++) {
- sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
- sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
- }
- }
-}
-
static int
create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
{
@@ -661,6 +749,9 @@ process_softreq(struct nitrox_softreq *sr)
int err = 0;
switch (ctx->nitrox_chain) {
+ case NITROX_CHAIN_CIPHER_ONLY:
+ err = process_cipher_data(sr);
+ break;
case NITROX_CHAIN_CIPHER_AUTH:
case NITROX_CHAIN_AUTH_CIPHER:
err = process_cipher_auth_data(sr);
--
2.20.1
^ permalink raw reply [flat|nested] 17+ messages in thread
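With the NITROX_CHAIN_CIPHER_ONLY path in place, a plain cipher session needs no chained auth transform. A minimal sketch of such a transform, e.g. AES-CBC, one of the algorithms whose cipher-only limitation this patch removes from the docs; key and IV_OFFSET are again illustrative placeholders:

	struct rte_crypto_sym_xform xform = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = key, .length = 16 },
			.iv = { .offset = IV_OFFSET, .length = 16 }, /* AES block size */
		},
		/* no auth xform chained: the session maps to the
		 * cipher-only chain handled by process_cipher_data() */
		.next = NULL,
	};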
* Re: [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 1/2] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 2/2] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
@ 2020-10-09 12:13 ` Akhil Goyal
2 siblings, 0 replies; 17+ messages in thread
From: Akhil Goyal @ 2020-10-09 12:13 UTC (permalink / raw)
To: Nagadheeraj Rottela; +Cc: dev, jsrikanth
> This patchset adds support for AES-GCM and cipher only offload.
> ---
> v3:
> * Fixed compilation error while compiling individual patches.
>
> v2:
> * Rebased patches to latest master and resolved merge conflict.
> * Updated release notes.
>
Release notes should be split across the patches. I forgot to comment on the previous version.
Fixed it while applying the patches.
Applied to dpdk-next-crypto
Thanks.
^ permalink raw reply [flat|nested] 17+ messages in thread
Thread overview: 17+ messages
2020-07-24 11:00 [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
2020-07-24 11:00 ` [dpdk-dev] [PATCH 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2020-09-22 19:11 ` Akhil Goyal
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 0/3] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 1/3] test/crypto: replace NITROX PMD specific test suite Nagadheeraj Rottela
2020-10-06 20:07 ` Akhil Goyal
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 1/2] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
2020-10-09 5:57 ` [dpdk-dev] [PATCH v3 2/2] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2020-10-09 12:13 ` [dpdk-dev] [PATCH v3 0/2] Add AES-GCM and cipher only offload support Akhil Goyal
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 2/3] crypto/nitrox: support AES-GCM Nagadheeraj Rottela
2020-10-06 20:04 ` Akhil Goyal
2020-09-24 13:04 ` [dpdk-dev] [PATCH v2 3/3] crypto/nitrox: support cipher only crypto operations Nagadheeraj Rottela
2020-07-26 18:57 ` [dpdk-dev] [PATCH 0/3] Add AES-GCM and cipher only offload support Akhil Goyal
2020-07-28 6:41 ` Nagadheeraj Rottela