* [PATCH 0/3] add remaining SGL support to AESNI_MB
@ 2022-08-12 13:23 Ciara Power
2022-08-12 13:23 ` [PATCH 1/3] test/crypto: fix wireless auth digest segment Ciara Power
` (6 more replies)
0 siblings, 7 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-12 13:23 UTC (permalink / raw)
Cc: dev, kai.ji, roy.fan.zhang, pablo.de.lara.guarch, Ciara Power
Currently, the intel-ipsec-mb library only supports SGL for
GCM and ChaCha20-Poly1305 algorithms through the JOB API.
To add SGL support for other algorithms, a workaround approach is
added in the AESNI_MB PMD. SGL feature flags can now be added to
the PMD.
This patchset also includes a fix for SGL wireless operations,
and some additional Snow3G SGL tests that exercise the various
SGL input/output combinations.
Ciara Power (3):
test/crypto: fix wireless auth digest segment
crypto/ipsec_mb: add remaining SGL support
test/crypto: add OOP snow3g SGL tests
app/test/test_cryptodev.c | 56 ++++++--
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 190 ++++++++++++++++++++-----
2 files changed, 204 insertions(+), 42 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH 1/3] test/crypto: fix wireless auth digest segment
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
@ 2022-08-12 13:23 ` Ciara Power
2022-08-12 13:23 ` [PATCH 2/3] crypto/ipsec_mb: add remaining SGL support Ciara Power
` (5 subsequent siblings)
6 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-12 13:23 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in AESNI_MB PMD after a subsequent patch enables SGL.
For example, if segment size is 2, and digest size is 4, then 4 bytes
are read from op->sym->auth.digest.data, which overflows into the memory
after the segment, rather than using the second segment that contains
the remaining half of the digest.
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 69a0301de0..e6925b6531 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation(
remaining_off -= rte_pktmbuf_data_len(sgl_buf);
sgl_buf = sgl_buf->next;
}
+
+ /* The last segment should be large enough to hold the full digest */
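This placeholder is intentionally empty; see the sibling edits.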
+ if (sgl_buf->data_len < auth_tag_len) {
+ rte_pktmbuf_free(sgl_buf->next);
+ sgl_buf->next = NULL;
+ rte_pktmbuf_append(sgl_buf, auth_tag_len - sgl_buf->data_len);
+ }
+
sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
uint8_t *, remaining_off);
sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH 2/3] crypto/ipsec_mb: add remaining SGL support
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
2022-08-12 13:23 ` [PATCH 1/3] test/crypto: fix wireless auth digest segment Ciara Power
@ 2022-08-12 13:23 ` Ciara Power
2022-08-12 13:23 ` [PATCH 3/3] test/crypto: add OOP snow3g SGL tests Ciara Power
` (4 subsequent siblings)
6 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-12 13:23 UTC (permalink / raw)
To: Fan Zhang, Pablo de Lara; +Cc: dev, kai.ji, Ciara Power
The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
algorithms using the JOB API.
This support was previously added to the AESNI_MB PMD, but the SGL feature
flags could not be advertised, as the library has no SGL support for the
other algorithms.
This patch adds a workaround SGL approach for other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
The job is processed, and on return, the linear buffer is split into the
original destination segments.
Existing AESNI_MB test cases pass with these feature flags added.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 190 ++++++++++++++++++++-----
1 file changed, 157 insertions(+), 33 deletions(-)
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 6d5d3ce8eb..53084f689a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t oop, const uint32_t auth_offset,
const uint32_t cipher_offset, const uint32_t auth_length,
- const uint32_t cipher_length)
+ const uint32_t cipher_length, uint8_t lb_sgl)
{
struct rte_mbuf *m_src, *m_dst;
uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t cipher_end, auth_end;
/* Only cipher then hash needs special calculation. */
- if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+ if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
return auth_offset;
m_src = op->sym->m_src;
@@ -1159,6 +1159,73 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
return 0;
}
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+ struct aesni_mb_session *session)
+{
+ uint64_t cipher_len, auth_len;
+ uint8_t *src, *linear_buf = NULL;
+ int total_len;
+ int lb_offset = 0;
+ struct rte_mbuf *src_seg;
+ uint16_t src_len;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ total_len = RTE_MAX(auth_len, cipher_len) + job->auth_tag_output_len_in_bytes;
+ linear_buf = rte_zmalloc(NULL, total_len, 0);
+ if (linear_buf == NULL) {
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ return -1;
+ }
+
+ for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+ (total_len - lb_offset > 0); src_seg = src_seg->next) {
+ src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+ src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+ rte_memcpy(linear_buf + lb_offset, src, src_len);
+ lb_offset += src_len;
+ }
+
+ job->src = linear_buf;
+ job->dst = linear_buf + dst_offset;
+ job->user_data2 = linear_buf;
+
+ if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ job->u.GCM.aad = linear_buf;
+
+ if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+ job->auth_tag_output = linear_buf + lb_offset;
+ else
+ job->auth_tag_output = linear_buf + auth_len;
+
+ return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+ if (alg == IMB_CIPHER_CHACHA20_POLY1305
+ || alg == IMB_CIPHER_GCM)
+ return 1;
+ return 0;
+}
/**
* Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1238,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
*
* @return
* - 0 on success, the IMB_JOB will be filled
- * - -1 if invalid session, IMB_JOB will not be filled
+ * - -1 if invalid session or errors allocating SGL linear buffer,
+ * IMB_JOB will not be filled
*/
static inline int
set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1259,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint32_t total_len;
IMB_JOB base_job;
uint8_t sgl = 0;
+ uint8_t lb_sgl = 0;
int ret;
session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1268,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return -1;
}
- if (op->sym->m_src->nb_segs > 1) {
- if (session->cipher.mode != IMB_CIPHER_GCM
- && session->cipher.mode !=
- IMB_CIPHER_CHACHA20_POLY1305) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
- " or CHACHA20_POLY1305 algorithms.");
- return -1;
- }
- sgl = 1;
- }
-
/* Set crypto operation */
job->chain_order = session->chain_order;
@@ -1233,6 +1290,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->dec_keys = session->cipher.expanded_aes_keys.decode;
}
+ if (!op->sym->m_dst) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else if (op->sym->m_dst == op->sym->m_src) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else {
+ /* out-of-place operation */
+ m_dst = op->sym->m_dst;
+ oop = 1;
+ }
+
+ if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+ sgl = 1;
+ if (!imb_lib_support_sgl_algo(session->cipher.mode))
+ lb_sgl = 1;
+ }
+
switch (job->hash_alg) {
case IMB_AUTH_AES_XCBC:
job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1408,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
m_offset = 0;
}
- if (!op->sym->m_dst) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else if (op->sym->m_dst == op->sym->m_src) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else {
- /* out-of-place operation */
- m_dst = op->sym->m_dst;
- oop = 1;
- }
-
/* Set digest output location */
if (job->hash_alg != IMB_AUTH_NULL &&
session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1498,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bits = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1515,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bytes = auth_len_in_bytes;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1527,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session, oop, op->sym->auth.data.offset,
op->sym->cipher.data.offset,
op->sym->auth.data.length,
- op->sym->cipher.data.length);
+ op->sym->cipher.data.length, lb_sgl);
job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1588,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->user_data = op;
if (sgl) {
+
+ if (lb_sgl)
+ return handle_sgl_linear(job, op, m_offset, session);
+
base_job = *job;
job->sgl_state = IMB_SGL_INIT;
job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1695,6 +1762,49 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
sess->auth.req_digest_len);
}
+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+ struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+ int lb_offset = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+ op->sym->m_src : op->sym->m_dst;
+ uint16_t total_len, dst_len;
+ uint64_t cipher_len, auth_len;
+ uint8_t *dst;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ total_len = RTE_MAX(auth_len, cipher_len);
+
+ if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+ total_len += job->auth_tag_output_len_in_bytes;
+
+ for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) {
+ dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+ dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+ rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+ lb_offset += dst_len;
+ }
+}
+
/**
* Process a completed job and return rte_mbuf which job processed
*
@@ -1712,6 +1822,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint32_t driver_id = ipsec_mb_get_driver_id(
IPSEC_MB_PMD_TYPE_AESNI_MB);
+ uint8_t *linear_buf = NULL;
#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
@@ -1740,6 +1851,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
case IMB_STATUS_COMPLETED:
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ if ((op->sym->m_src->nb_segs > 1 ||
+ (op->sym->m_dst != NULL &&
+ op->sym->m_dst->nb_segs > 1)) &&
+ !imb_lib_support_sgl_algo(sess->cipher.mode)) {
+ linear_buf = (uint8_t *) job->user_data2;
+ post_process_sgl_linear(op, job, sess, linear_buf);
+ }
+
if (job->hash_alg == IMB_AUTH_NULL)
break;
@@ -1766,6 +1885,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
default:
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
}
+ rte_free(linear_buf);
}
/* Free session if a session-less crypto op */
@@ -2252,7 +2372,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+ RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+ RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
aesni_mb_data->internals_priv_size = 0;
aesni_mb_data->ops = &aesni_mb_pmd_ops;
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH 3/3] test/crypto: add OOP snow3g SGL tests
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
2022-08-12 13:23 ` [PATCH 1/3] test/crypto: fix wireless auth digest segment Ciara Power
2022-08-12 13:23 ` [PATCH 2/3] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-08-12 13:23 ` Ciara Power
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (3 subsequent siblings)
6 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-12 13:23 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
More tests are added to cover OOP SGL variations for snow3g,
namely LB_IN_SGL_OUT and SGL_IN_LB_OUT.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
1 file changed, 39 insertions(+), 9 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index e6925b6531..83860d1853 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4347,7 +4347,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
}
static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+ uint8_t sgl_in, uint8_t sgl_out)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4378,9 +4379,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
uint64_t feat_flags = dev_info.feature_flags;
- if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
- printf("Device doesn't support out-of-place scatter-gather "
- "in both input and output mbufs. "
+ if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+ || ((!sgl_in && sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+ || ((sgl_in && !sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+ printf("Device doesn't support out-of-place scatter-gather type. "
"Test Skipped.\n");
return TEST_SKIPPED;
}
@@ -4405,10 +4409,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
/* the algorithms block size */
plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);
- ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 10, 0);
- ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 3, 0);
+ if (sgl_in)
+ ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 10, 0);
+ else {
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+ }
+
+ if (sgl_out)
+ ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 3, 0);
+ else {
+ ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+ }
TEST_ASSERT_NOT_NULL(ut_params->ibuf,
"Failed to allocate input buffer in mempool");
@@ -6762,9 +6777,20 @@ test_snow3g_encryption_test_case_1_oop(void)
static int
test_snow3g_encryption_test_case_1_oop_sgl(void)
{
- return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
}
+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}
static int
test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -15985,6 +16011,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_offset_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v2 0/5] add remaining SGL support to AESNI_MB
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
` (2 preceding siblings ...)
2022-08-12 13:23 ` [PATCH 3/3] test/crypto: add OOP snow3g SGL tests Ciara Power
@ 2022-08-25 14:28 ` Ciara Power
2022-08-25 14:28 ` [PATCH v2 1/5] test/crypto: fix wireless auth digest segment Ciara Power
` (4 more replies)
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (2 subsequent siblings)
6 siblings, 5 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-25 14:28 UTC (permalink / raw)
Cc: dev, kai.ji, roy.fan.zhang, pablo.de.lara.guarch, Ciara Power
Currently, the intel-ipsec-mb library only supports SGL for
GCM and ChaCha20-Poly1305 algorithms through the JOB API.
To add SGL support for other algorithms, a workaround approach is
added in the AESNI_MB PMD. SGL feature flags can now be added to
the PMD.
This patchset also includes a fix for SGL wireless operations
and a fix for sessionless cleanup.
Some additional Snow3G SGL and AES tests are also added for
various SGL input/output combinations that were not
previously tested.
v2:
- Added documentation changes.
- Added fix for sessionless cleanup.
- Modified blockcipher tests to support various SGL types.
- Added more SGL AES tests.
- Small fixes.
Ciara Power (5):
test/crypto: fix wireless auth digest segment
crypto/ipsec_mb: fix sessionless cleanup
crypto/ipsec_mb: add remaining SGL support
test/crypto: add OOP snow3g SGL tests
test/crypto: add remaining blockcipher SGL tests
app/test/test_cryptodev.c | 56 +++-
app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
app/test/test_cryptodev_blockcipher.c | 50 +--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 4 +
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 195 ++++++++---
drivers/crypto/ipsec_mb/pmd_chacha_poly.c | 4 -
drivers/crypto/ipsec_mb/pmd_kasumi.c | 5 -
drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 -
drivers/crypto/ipsec_mb/pmd_zuc.c | 4 -
13 files changed, 548 insertions(+), 134 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v2 1/5] test/crypto: fix wireless auth digest segment
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
@ 2022-08-25 14:28 ` Ciara Power
2022-08-25 14:28 ` [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup Ciara Power
` (3 subsequent siblings)
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-25 14:28 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in AESNI_MB PMD after a subsequent patch enables SGL.
For example, if segment size is 2, and digest size is 4, then 4 bytes
are read from op->sym->auth.digest.data, which overflows into the memory
after the segment, rather than using the second segment that contains
the remaining half of the digest.
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 69a0301de0..e6925b6531 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation(
remaining_off -= rte_pktmbuf_data_len(sgl_buf);
sgl_buf = sgl_buf->next;
}
+
+ /* The last segment should be large enough to hold the full digest */
+ if (sgl_buf->data_len < auth_tag_len) {
+ rte_pktmbuf_free(sgl_buf->next);
+ sgl_buf->next = NULL;
+ rte_pktmbuf_append(sgl_buf, auth_tag_len - sgl_buf->data_len);
+ }
+
sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
uint8_t *, remaining_off);
sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
2022-08-25 14:28 ` [PATCH v2 1/5] test/crypto: fix wireless auth digest segment Ciara Power
@ 2022-08-25 14:28 ` Ciara Power
2022-09-15 11:38 ` De Lara Guarch, Pablo
2022-08-25 14:28 ` [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
` (2 subsequent siblings)
4 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-08-25 14:28 UTC (permalink / raw)
To: Fan Zhang, Pablo de Lara; +Cc: dev, kai.ji, Ciara Power, slawomirx.mrozowicz
Currently, for a sessionless op, the session created is reset before
being put back into the mempool. This causes issues as the object isn't
correctly released into the mempool.
Fixes: c68d7aa354f6 ("crypto/aesni_mb: use architecture independent macros")
Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
Fixes: f16662885472 ("crypto/ipsec_mb: add chacha_poly PMD")
Cc: roy.fan.zhang@intel.com
Cc: slawomirx.mrozowicz@intel.com
Cc: kai.ji@intel.com
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 4 ----
drivers/crypto/ipsec_mb/pmd_chacha_poly.c | 4 ----
drivers/crypto/ipsec_mb/pmd_kasumi.c | 5 -----
drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 ----
drivers/crypto/ipsec_mb/pmd_zuc.c | 4 ----
5 files changed, 21 deletions(-)
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 6d5d3ce8eb..944fce0261 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1770,10 +1770,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
/* Free session if a session-less crypto op */
if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(sess, 0, sizeof(struct aesni_mb_session));
- memset(op->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- op->sym->session));
rte_mempool_put(qp->sess_mp_priv, sess);
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
diff --git a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
index d953d6e5f5..31397b6395 100644
--- a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
+++ b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
@@ -289,10 +289,6 @@ handle_completed_chacha20_poly1305_crypto_op(struct ipsec_mb_qp *qp,
/* Free session if a session-less crypto op */
if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(sess, 0, sizeof(struct chacha20_poly1305_session));
- memset(op->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- op->sym->session));
rte_mempool_put(qp->sess_mp_priv, sess);
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
diff --git a/drivers/crypto/ipsec_mb/pmd_kasumi.c b/drivers/crypto/ipsec_mb/pmd_kasumi.c
index c9d4f9d0ae..de37e012bd 100644
--- a/drivers/crypto/ipsec_mb/pmd_kasumi.c
+++ b/drivers/crypto/ipsec_mb/pmd_kasumi.c
@@ -230,11 +230,6 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(session, 0, sizeof(struct kasumi_session));
- memset(
- ops[i]->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- ops[i]->sym->session));
rte_mempool_put(qp->sess_mp_priv, session);
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index 9a85f46721..1634c54fb7 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -361,10 +361,6 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(session, 0, sizeof(struct snow3g_session));
- memset(ops[i]->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- ops[i]->sym->session));
rte_mempool_put(qp->sess_mp_priv, session);
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
diff --git a/drivers/crypto/ipsec_mb/pmd_zuc.c b/drivers/crypto/ipsec_mb/pmd_zuc.c
index e36c7092d6..564ca3457c 100644
--- a/drivers/crypto/ipsec_mb/pmd_zuc.c
+++ b/drivers/crypto/ipsec_mb/pmd_zuc.c
@@ -238,10 +238,6 @@ process_ops(struct rte_crypto_op **ops, enum ipsec_mb_operation op_type,
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
- memset(sessions[i], 0, sizeof(struct zuc_session));
- memset(ops[i]->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- ops[i]->sym->session));
rte_mempool_put(qp->sess_mp_priv, sessions[i]);
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
2022-08-25 14:28 ` [PATCH v2 1/5] test/crypto: fix wireless auth digest segment Ciara Power
2022-08-25 14:28 ` [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup Ciara Power
@ 2022-08-25 14:28 ` Ciara Power
2022-09-15 11:47 ` De Lara Guarch, Pablo
2022-08-25 14:29 ` [PATCH v2 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
2022-08-25 14:29 ` [PATCH v2 5/5] test/crypto: add remaining blockcipher " Ciara Power
4 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-08-25 14:28 UTC (permalink / raw)
To: Fan Zhang, Pablo de Lara; +Cc: dev, kai.ji, Ciara Power
The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
algorithms using the JOB API.
This support was previously added to the AESNI_MB PMD, but the SGL feature
flags could not be advertised, as the library has no SGL support for the
other algorithms.
This patch adds a workaround SGL approach for other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
The job is processed, and on return, the linear buffer is split into the
original destination segments.
Existing AESNI_MB test cases pass with these feature flags added.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
v2:
- Small improvements when copying segments to linear buffer.
- Added documentation changes.
---
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 4 +
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 191 ++++++++++++++++----
4 files changed, 166 insertions(+), 34 deletions(-)
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 07222ee117..59c134556f 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -72,7 +72,6 @@ Protocol offloads:
Limitations
-----------
-* Chained mbufs are not supported.
* Out-of-place is not supported for combined Crypto-CRC DOCSIS security
protocol.
* RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 3c648a391e..e4e965c35a 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -12,6 +12,10 @@ CPU AVX = Y
CPU AVX2 = Y
CPU AVX512 = Y
CPU AESNI = Y
+In Place SGL = Y
+OOP SGL In SGL Out = Y
+OOP SGL In LB Out = Y
+OOP LB In SGL Out = Y
OOP LB In LB Out = Y
CPU crypto = Y
Symmetric sessionless = Y
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 8c021cf050..6416f0a4e1 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added SGL support to AESNI_MB PMD.**
+
+ Added support for SGL to AESNI_MB PMD. Support for in-place,
+ OOP SGL in SGL out, OOP LB in SGL out, and OOP SGL in LB out added.
Removed Items
-------------
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 944fce0261..800a9ae72c 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t oop, const uint32_t auth_offset,
const uint32_t cipher_offset, const uint32_t auth_length,
- const uint32_t cipher_length)
+ const uint32_t cipher_length, uint8_t lb_sgl)
{
struct rte_mbuf *m_src, *m_dst;
uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t cipher_end, auth_end;
/* Only cipher then hash needs special calculation. */
- if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+ if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
return auth_offset;
m_src = op->sym->m_src;
@@ -1159,6 +1159,74 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
return 0;
}
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+ struct aesni_mb_session *session)
+{
+ uint64_t cipher_len, auth_len;
+ uint8_t *src, *linear_buf = NULL;
+ int total_len;
+ int lb_offset = 0;
+ struct rte_mbuf *src_seg;
+ uint16_t src_len;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ total_len = RTE_MAX(auth_len, cipher_len);
+ linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
+ if (linear_buf == NULL) {
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ return -1;
+ }
+
+ for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+ (total_len - lb_offset > 0);
+ src_seg = src_seg->next) {
+ src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+ src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+ rte_memcpy(linear_buf + lb_offset, src, src_len);
+ lb_offset += src_len;
+ }
+
+ job->src = linear_buf;
+ job->dst = linear_buf + dst_offset;
+ job->user_data2 = linear_buf;
+
+ if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ job->u.GCM.aad = linear_buf;
+
+ if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+ job->auth_tag_output = linear_buf + lb_offset;
+ else
+ job->auth_tag_output = linear_buf + auth_len;
+
+ return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+ if (alg == IMB_CIPHER_CHACHA20_POLY1305
+ || alg == IMB_CIPHER_GCM)
+ return 1;
+ return 0;
+}
/**
* Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1239,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
*
* @return
* - 0 on success, the IMB_JOB will be filled
- * - -1 if invalid session, IMB_JOB will not be filled
+ * - -1 if invalid session or errors allocating SGL linear buffer,
+ * IMB_JOB will not be filled
*/
static inline int
set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1260,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint32_t total_len;
IMB_JOB base_job;
uint8_t sgl = 0;
+ uint8_t lb_sgl = 0;
int ret;
session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1269,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return -1;
}
- if (op->sym->m_src->nb_segs > 1) {
- if (session->cipher.mode != IMB_CIPHER_GCM
- && session->cipher.mode !=
- IMB_CIPHER_CHACHA20_POLY1305) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
- " or CHACHA20_POLY1305 algorithms.");
- return -1;
- }
- sgl = 1;
- }
-
/* Set crypto operation */
job->chain_order = session->chain_order;
@@ -1233,6 +1291,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->dec_keys = session->cipher.expanded_aes_keys.decode;
}
+ if (!op->sym->m_dst) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else if (op->sym->m_dst == op->sym->m_src) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else {
+ /* out-of-place operation */
+ m_dst = op->sym->m_dst;
+ oop = 1;
+ }
+
+ if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+ sgl = 1;
+ if (!imb_lib_support_sgl_algo(session->cipher.mode))
+ lb_sgl = 1;
+ }
+
switch (job->hash_alg) {
case IMB_AUTH_AES_XCBC:
job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1409,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
m_offset = 0;
}
- if (!op->sym->m_dst) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else if (op->sym->m_dst == op->sym->m_src) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else {
- /* out-of-place operation */
- m_dst = op->sym->m_dst;
- oop = 1;
- }
-
/* Set digest output location */
if (job->hash_alg != IMB_AUTH_NULL &&
session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1499,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bits = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1516,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bytes = auth_len_in_bytes;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1528,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session, oop, op->sym->auth.data.offset,
op->sym->cipher.data.offset,
op->sym->auth.data.length,
- op->sym->cipher.data.length);
+ op->sym->cipher.data.length, lb_sgl);
job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1589,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->user_data = op;
if (sgl) {
+
+ if (lb_sgl)
+ return handle_sgl_linear(job, op, m_offset, session);
+
base_job = *job;
job->sgl_state = IMB_SGL_INIT;
job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1695,6 +1763,49 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
sess->auth.req_digest_len);
}
+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+ struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+ int lb_offset = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+ op->sym->m_src : op->sym->m_dst;
+ uint16_t total_len, dst_len;
+ uint64_t cipher_len, auth_len;
+ uint8_t *dst;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ total_len = RTE_MAX(auth_len, cipher_len);
+
+ if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+ total_len += job->auth_tag_output_len_in_bytes;
+
+ for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) {
+ dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+ dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+ rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+ lb_offset += dst_len;
+ }
+}
+
/**
* Process a completed job and return rte_mbuf which job processed
*
@@ -1712,6 +1823,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint32_t driver_id = ipsec_mb_get_driver_id(
IPSEC_MB_PMD_TYPE_AESNI_MB);
+ uint8_t *linear_buf = NULL;
#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
@@ -1740,6 +1852,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
case IMB_STATUS_COMPLETED:
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ if ((op->sym->m_src->nb_segs > 1 ||
+ (op->sym->m_dst != NULL &&
+ op->sym->m_dst->nb_segs > 1)) &&
+ !imb_lib_support_sgl_algo(sess->cipher.mode)) {
+ linear_buf = (uint8_t *) job->user_data2;
+ post_process_sgl_linear(op, job, sess, linear_buf);
+ }
+
if (job->hash_alg == IMB_AUTH_NULL)
break;
@@ -1766,6 +1886,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
default:
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
}
+ rte_free(linear_buf);
}
/* Free session if a session-less crypto op */
@@ -2248,7 +2369,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+ RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+ RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
aesni_mb_data->internals_priv_size = 0;
aesni_mb_data->ops = &aesni_mb_pmd_ops;
--
2.25.1
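The core of the workaround above is a gather/scatter pair: handle_sgl_linear() copies the segment chain into one contiguous buffer before the job is submitted, and post_process_sgl_linear() copies the result back over the destination chain. A minimal standalone sketch of that pattern, using a hypothetical `struct seg` in place of rte_mbuf (the real PMD walks `op->sym->m_src`/`m_dst` and uses rte_memcpy):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for an rte_mbuf segment chain. */
struct seg {
	uint8_t *data;
	uint16_t len;
	struct seg *next;
};

/* Gather: linearise up to total_len bytes of the chain into lb,
 * mirroring the copy loop in handle_sgl_linear(). */
static int gather(const struct seg *s, uint8_t *lb, int total_len)
{
	int off = 0;

	for (; s != NULL && total_len - off > 0; s = s->next) {
		int n = s->len < total_len - off ? s->len : total_len - off;
		memcpy(lb + off, s->data, n);
		off += n;
	}
	return off; /* bytes linearised */
}

/* Scatter: copy the processed linear buffer back over the
 * destination chain, mirroring post_process_sgl_linear(). */
static int scatter(struct seg *d, const uint8_t *lb, int total_len)
{
	int off = 0;

	for (; d != NULL && total_len - off > 0; d = d->next) {
		int n = d->len < total_len - off ? d->len : total_len - off;
		memcpy(d->data, lb + off, n);
		off += n;
	}
	return off;
}
```

In the PMD, `total_len` is the maximum of the cipher and auth spans (plus digest space on the generate path), and the linear buffer pointer is stashed in `job->user_data2` so it can be found and freed after the job completes.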
* [PATCH v2 4/5] test/crypto: add OOP snow3g SGL tests
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (2 preceding siblings ...)
2022-08-25 14:28 ` [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-08-25 14:29 ` Ciara Power
2022-08-25 14:29 ` [PATCH v2 5/5] test/crypto: add remaining blockcipher " Ciara Power
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-25 14:29 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
More tests are added to test variations of OOP SGL for snow3g.
These include LB_IN_SGL_OUT and SGL_IN_LB_OUT.
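Each (sgl_in, sgl_out) combination requires exactly one of the out-of-place cryptodev feature flags, which is what the new skip check in the test verifies. A sketch of that mapping, using hypothetical bit values in place of the real RTE_CRYPTODEV_FF_* constants:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the RTE_CRYPTODEV_FF_* feature bits. */
#define FF_OOP_SGL_IN_SGL_OUT (1u << 0)
#define FF_OOP_LB_IN_SGL_OUT  (1u << 1)
#define FF_OOP_SGL_IN_LB_OUT  (1u << 2)

/* Mirror of the skip logic in test_snow3g_encryption_oop_sgl():
 * return the one feature flag the buffer combination needs. */
static uint32_t required_flag(int sgl_in, int sgl_out)
{
	if (sgl_in && sgl_out)
		return FF_OOP_SGL_IN_SGL_OUT;
	if (!sgl_in && sgl_out)
		return FF_OOP_LB_IN_SGL_OUT;
	if (sgl_in && !sgl_out)
		return FF_OOP_SGL_IN_LB_OUT;
	return 0; /* LB in, LB out: no SGL flag required */
}
```

The test then skips when `(feat_flags & required_flag(sgl_in, sgl_out))` is zero, so each case only runs on devices advertising that exact capability.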
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
1 file changed, 39 insertions(+), 9 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index e6925b6531..83860d1853 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4347,7 +4347,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
}
static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+ uint8_t sgl_in, uint8_t sgl_out)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4378,9 +4379,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
uint64_t feat_flags = dev_info.feature_flags;
- if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
- printf("Device doesn't support out-of-place scatter-gather "
- "in both input and output mbufs. "
+ if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+ || ((!sgl_in && sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+ || ((sgl_in && !sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+ printf("Device doesn't support out-of-place scatter-gather type. "
"Test Skipped.\n");
return TEST_SKIPPED;
}
@@ -4405,10 +4409,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
/* the algorithms block size */
plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);
- ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 10, 0);
- ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 3, 0);
+ if (sgl_in)
+ ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 10, 0);
+ else {
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+ }
+
+ if (sgl_out)
+ ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 3, 0);
+ else {
+ ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+ }
TEST_ASSERT_NOT_NULL(ut_params->ibuf,
"Failed to allocate input buffer in mempool");
@@ -6762,9 +6777,20 @@ test_snow3g_encryption_test_case_1_oop(void)
static int
test_snow3g_encryption_test_case_1_oop_sgl(void)
{
- return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
}
+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}
static int
test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -15985,6 +16011,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_offset_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.25.1
* [PATCH v2 5/5] test/crypto: add remaining blockcipher SGL tests
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (3 preceding siblings ...)
2022-08-25 14:29 ` [PATCH v2 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
@ 2022-08-25 14:29 ` Ciara Power
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-08-25 14:29 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang, Yipeng Wang, Sameh Gobriel,
Bruce Richardson, Vladimir Medvedkin
Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
The current blockcipher test function only has support for two types of
SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into
the function, with the number of segments always set to 3.
To ensure all SGL types are tested, blockcipher test vectors now have
fields to specify the SGL type and the number of segments.
If these fields are missing, the previous defaults are used,
either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.
Some AES and Hash vectors are modified to use these new fields, and new
AES tests are added to test the SGL types that were not previously
being tested.
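The backwards-compatible defaults described above reduce to two small fallbacks in test_blockcipher_one_case(). A sketch of that selection, with hypothetical bit values standing in for the real RTE_CRYPTODEV_FF_* flags:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the cryptodev SGL feature bits. */
#define IN_PLACE_SGL      (1u << 0)
#define OOP_SGL_IN_LB_OUT (1u << 1)

/* When a vector leaves .sgl_flag unset (0), keep the historical
 * default: OOP_SGL_IN_LB_OUT for OOP vectors, IN_PLACE_SGL otherwise. */
static uint32_t effective_sgl_type(uint32_t sgl_flag, int is_oop)
{
	if (sgl_flag != 0)
		return sgl_flag;
	return is_oop ? OOP_SGL_IN_LB_OUT : IN_PLACE_SGL;
}

/* Likewise, .sgl_segs == 0 falls back to the old 3-segment default. */
static int effective_segs(int sgl_segs)
{
	return sgl_segs == 0 ? 3 : sgl_segs;
}
```

With these defaults, vectors that predate the new fields keep their previous behaviour, while new vectors can request any SGL type and segment count (e.g. the 16-segment SGL-in/SGL-out cases added here).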
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
app/test/test_cryptodev_blockcipher.c | 50 +--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
4 files changed, 335 insertions(+), 70 deletions(-)
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index a797af1b00..2c1875d3d9 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -4163,12 +4163,44 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_2,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
},
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
+
{
.test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest",
.test_data = &aes_test_data_3,
@@ -4193,11 +4225,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4207,10 +4280,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4255,12 +4370,46 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
- "Scatter Gather Sessionless",
+ "Scatter Gather Sessionless (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in LB out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
@@ -4270,11 +4419,42 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 2
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC XCBC Encryption Digest",
@@ -4358,6 +4538,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN_ENC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -4382,6 +4564,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4397,6 +4581,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DEC_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4421,6 +4607,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4504,6 +4692,41 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
{
.test_descr = "AES-128-CBC Decryption",
.test_data = &aes_test_data_4,
@@ -4515,11 +4738,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
{
- .test_descr = "AES-192-CBC Encryption Scatter gather",
+ .test_descr = "AES-192-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in SGL out)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-192-CBC Decryption",
@@ -4527,10 +4778,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
},
{
- .test_descr = "AES-192-CBC Decryption Scatter Gather",
+ .test_descr = "AES-192-CBC Decryption Scatter Gather (Inplace)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-CBC Encryption",
@@ -4689,67 +4969,42 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
},
{
.test_descr = "AES-256-XTS Encryption (512-byte plaintext"
- " Dataunit 512) Scater gather OOP",
+ " Dataunit 512) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (512-byte plaintext"
- " Dataunit 512) Scater gather OOP",
+ " Dataunit 512) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Encryption (512-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Decryption (512-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
- " Dataunit 4096) Scater gather OOP",
+ " Dataunit 4096) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
- " Dataunit 4096) Scater gather OOP",
+ " Dataunit 4096) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "cipher-only - NULL algo - x8 - encryption",
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index b5813b956f..f1ef0b606f 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -96,7 +96,9 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_dst_buf[MBUF_SIZE];
uint32_t pad_len;
- int nb_segs = 1;
+ int nb_segs_in = 1;
+ int nb_segs_out = 1;
+ uint64_t sgl_type = t->sgl_flag;
uint32_t nb_iterates = 0;
rte_cryptodev_info_get(dev_id, &dev_info);
@@ -121,30 +123,31 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
- uint64_t oop_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ if (sgl_type == 0) {
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP)
+ sgl_type = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ else
+ sgl_type = RTE_CRYPTODEV_FF_IN_PLACE_SGL;
+ }
- if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- if (!(feat_flags & oop_flag)) {
- printf("Device doesn't support out-of-place "
- "scatter-gather in input mbuf. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
- } else {
- if (!(feat_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL)) {
- printf("Device doesn't support in-place "
- "scatter-gather mbufs. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
+ if (!(feat_flags & sgl_type)) {
+ printf("Device doesn't support scatter-gather type."
+ " Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "SKIPPED");
+ return TEST_SKIPPED;
}
- nb_segs = 3;
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_IN_PLACE_SGL)
+ nb_segs_in = t->sgl_segs == 0 ? 3 : t->sgl_segs;
+
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)
+ nb_segs_out = t->sgl_segs == 0 ? 3 : t->sgl_segs;
}
+
if (!!(feat_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) ^
tdata->wrapped_key) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
@@ -207,7 +210,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
/* for contiguous mbuf, nb_segs is 1 */
ibuf = create_segmented_mbuf(mbuf_pool,
- tdata->ciphertext.len, nb_segs, src_pattern);
+ tdata->ciphertext.len, nb_segs_in, src_pattern);
if (ibuf == NULL) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
@@ -256,7 +259,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- obuf = rte_pktmbuf_alloc(mbuf_pool);
+ obuf = create_segmented_mbuf(mbuf_pool,
+ tdata->ciphertext.len, nb_segs_out, dst_pattern);
if (!obuf) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u "
"FAILED: %s", __LINE__,
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 84f5d57787..bad93a5ec1 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -57,6 +57,8 @@ struct blockcipher_test_case {
const struct blockcipher_test_data *test_data;
uint8_t op_mask; /* operation mask */
uint8_t feature_mask;
+ uint64_t sgl_flag;
+ uint8_t sgl_segs;
};
struct blockcipher_test_data {
diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h
index f7a0981636..944a52721c 100644
--- a/app/test/test_cryptodev_hash_test_vectors.h
+++ b/app/test/test_cryptodev_hash_test_vectors.h
@@ -463,10 +463,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
},
{
- .test_descr = "HMAC-SHA1 Digest Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "HMAC-SHA1 Digest Verify",
@@ -474,10 +476,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
},
{
- .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "SHA224 Digest",
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup
2022-08-25 14:28 ` [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup Ciara Power
@ 2022-09-15 11:38 ` De Lara Guarch, Pablo
2022-09-21 13:02 ` Power, Ciara
0 siblings, 1 reply; 38+ messages in thread
From: De Lara Guarch, Pablo @ 2022-09-15 11:38 UTC (permalink / raw)
To: Power, Ciara, Zhang, Roy Fan; +Cc: dev, Ji, Kai, Mrozowicz, SlawomirX
Hi Ciara,
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Thursday, August 25, 2022 3:29 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Power, Ciara
> <ciara.power@intel.com>; Mrozowicz, SlawomirX
> <slawomirx.mrozowicz@intel.com>
> Subject: [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup
>
> Currently, for a sessionless op, the session created is reset before being put
> back into the mempool. This causes issues as the object isn't correctly
> released into the mempool.
>
> Fixes: c68d7aa354f6 ("crypto/aesni_mb: use architecture independent
> macros")
> Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
> Fixes: f16662885472 ("crypto/ipsec_mb: add chacha_poly PMD")
> Cc: roy.fan.zhang@intel.com
> Cc: slawomirx.mrozowicz@intel.com
> Cc: kai.ji@intel.com
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
> ---
> drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 4 ----
> drivers/crypto/ipsec_mb/pmd_chacha_poly.c | 4 ----
> drivers/crypto/ipsec_mb/pmd_kasumi.c | 5 -----
> drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 ----
> drivers/crypto/ipsec_mb/pmd_zuc.c | 4 ----
> 5 files changed, 21 deletions(-)
>
> diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> index 6d5d3ce8eb..944fce0261 100644
> --- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> @@ -1770,10 +1770,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp,
> IMB_JOB *job)
>
> /* Free session if a session-less crypto op */
> if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
> - memset(sess, 0, sizeof(struct aesni_mb_session));
> - memset(op->sym->session, 0,
> -
This will leave some info left over, so it may cause a problem if this object is reused. Is this memset clearing the mempool object header, and is that the reason why it cannot be released properly?
Maybe Fan/Kai/Slawomir will know more on this.
* RE: [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support
2022-08-25 14:28 ` [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-09-15 11:47 ` De Lara Guarch, Pablo
0 siblings, 0 replies; 38+ messages in thread
From: De Lara Guarch, Pablo @ 2022-09-15 11:47 UTC (permalink / raw)
To: Power, Ciara, Zhang, Roy Fan; +Cc: dev, Ji, Kai
Hi Ciara,
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Thursday, August 25, 2022 3:29 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Power, Ciara
> <ciara.power@intel.com>
> Subject: [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support
>
> The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly algorithms
> using the JOB API.
> This support was added to AESNI_MB PMD previously, but the SGL feature
> flags could not be added due to no SGL support for other algorithms.
>
> This patch adds a workaround SGL approach for other algorithms using the
> JOB API. The segmented input buffers are copied into a linear buffer, which is
> passed as a single job to intel-ipsec-mb.
> The job is processed, and on return, the linear buffer is split into the original
> destination segments.
>
> Existing AESNI_MB testcases are passing with these feature flags added.
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
>
> ---
> v2:
> - Small improvements when copying segments to linear buffer.
> - Added documentation changes.
> ---
> doc/guides/cryptodevs/aesni_mb.rst | 1 -
> doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
> doc/guides/rel_notes/release_22_11.rst | 4 +
> drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 191 ++++++++++++++++----
> 4 files changed, 166 insertions(+), 34 deletions(-)
>
...
> +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
...
>
> +static int
> +handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t
> dst_offset,
> + struct aesni_mb_session *session)
> +{
> + uint64_t cipher_len, auth_len;
> + uint8_t *src, *linear_buf = NULL;
> + int total_len;
> + int lb_offset = 0;
Suggest using unsigned here (probably uint64_t for total_len, as it gets the value from max between cipher_len and auth_len).
> + struct rte_mbuf *src_seg;
> + uint16_t src_len;
> +
> + if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
> + job->cipher_mode ==
> IMB_CIPHER_KASUMI_UEA1_BITLEN)
> + cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
> + (job->cipher_start_src_offset_in_bits >> 3);
> + else
> + cipher_len = job->msg_len_to_cipher_in_bytes +
> + job->cipher_start_src_offset_in_bytes;
> +
> + if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
> + job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
> + auth_len = (job->msg_len_to_hash_in_bits >> 3) +
> + job->hash_start_src_offset_in_bytes;
> + else if (job->hash_alg == IMB_AUTH_AES_GMAC)
> + auth_len = job->u.GCM.aad_len_in_bytes;
> + else
> + auth_len = job->msg_len_to_hash_in_bytes +
> + job->hash_start_src_offset_in_bytes;
> +
> + total_len = RTE_MAX(auth_len, cipher_len);
> + linear_buf = rte_zmalloc(NULL, total_len + job-
> >auth_tag_output_len_in_bytes, 0);
> + if (linear_buf == NULL) {
> + IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear
> Buffer\n");
> + return -1;
> + }
> +
..
> +static void
> +post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
> + struct aesni_mb_session *sess, uint8_t *linear_buf) {
> +
> + int lb_offset = 0;
> + struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
> + op->sym->m_src : op->sym->m_dst;
> + uint16_t total_len, dst_len;
> + uint64_t cipher_len, auth_len;
> + uint8_t *dst;
> +
> + if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
> + job->cipher_mode ==
> IMB_CIPHER_KASUMI_UEA1_BITLEN)
> + cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
> + (job->cipher_start_src_offset_in_bits >> 3);
> + else
> + cipher_len = job->msg_len_to_cipher_in_bytes +
> + job->cipher_start_src_offset_in_bytes;
> +
> + if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
> + job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
> + auth_len = (job->msg_len_to_hash_in_bits >> 3) +
> + job->hash_start_src_offset_in_bytes;
> + else if (job->hash_alg == IMB_AUTH_AES_GMAC)
> + auth_len = job->u.GCM.aad_len_in_bytes;
> + else
> + auth_len = job->msg_len_to_hash_in_bytes +
> + job->hash_start_src_offset_in_bytes;
> +
The code above is the same as the code in handle_sgl_linear.
Maybe you can have a separate function and remove duplication.
* [PATCH v3 0/5] add remaining SGL support to AESNI_MB
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
` (3 preceding siblings ...)
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
@ 2022-09-21 12:50 ` Ciara Power
2022-09-21 12:50 ` [PATCH v3 1/5] test/crypto: fix wireless auth digest segment Ciara Power
` (5 more replies)
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
6 siblings, 6 replies; 38+ messages in thread
From: Ciara Power @ 2022-09-21 12:50 UTC (permalink / raw)
Cc: dev, kai.ji, roy.fan.zhang, pablo.de.lara.guarch, Ciara Power
Currently, the intel-ipsec-mb library only supports SGL for
GCM and ChaCha20-Poly1305 algorithms through the JOB API.
To add SGL support for other algorithms, a workaround approach is
added in the AESNI_MB PMD. SGL feature flags can now be added to
the PMD.
This patchset also includes fixes for SGL wireless operations,
session cleanup, and session creation for sessionless operations.
Some additional Snow3G SGL and AES tests are also added for
various SGL input/output combinations that were not
previously being tested.
v3:
- Modified fix to reset sessions, and ensure values are then set for
sessionless testcases. V2 fix just ensured the same values in
session objects were reused, as they were not being reset,
which was incorrect.
- Reduced code duplication by adding a reusable function.
- Changed int to uint64_t for total_len.
v2:
- Added documentation changes.
- Added fix for sessionless cleanup.
- Modified blockcipher tests to support various SGL types.
- Added more SGL AES tests.
- Small fixes.
Ciara Power (5):
test/crypto: fix wireless auth digest segment
crypto/ipsec_mb: fix session creation for sessionless
crypto/ipsec_mb: add remaining SGL support
test/crypto: add OOP snow3g SGL tests
test/crypto: add remaining blockcipher SGL tests
app/test/test_cryptodev.c | 56 +++-
app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
app/test/test_cryptodev_blockcipher.c | 50 +--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 5 +
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 12 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 ++++++++--
lib/cryptodev/rte_cryptodev.c | 1 +
11 files changed, 547 insertions(+), 117 deletions(-)
--
2.25.1
* [PATCH v3 1/5] test/crypto: fix wireless auth digest segment
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
@ 2022-09-21 12:50 ` Ciara Power
2022-09-21 13:32 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
` (4 subsequent siblings)
5 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-09-21 12:50 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in AESNI_MB PMD after a subsequent patch enables SGL.
For example, if segment size is 2, and digest size is 4, then 4 bytes
are read from op->sym->auth.digest.data, which overflows into the memory
after the segment, rather than using the second segment that contains
the remaining half of the digest.
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 6ee4480399..5533c135b0 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation(
remaining_off -= rte_pktmbuf_data_len(sgl_buf);
sgl_buf = sgl_buf->next;
}
+
+ /* The last segment should be large enough to hold full digest */
+ if (sgl_buf->data_len < auth_tag_len) {
+ rte_pktmbuf_free(sgl_buf->next);
+ sgl_buf->next = NULL;
+ rte_pktmbuf_append(sgl_buf, auth_tag_len - sgl_buf->data_len);
+ }
+
sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
uint8_t *, remaining_off);
sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,
--
2.25.1
* [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
2022-09-21 12:50 ` [PATCH v3 1/5] test/crypto: fix wireless auth digest segment Ciara Power
@ 2022-09-21 12:50 ` Ciara Power
2022-09-21 13:33 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
` (3 subsequent siblings)
5 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-09-21 12:50 UTC (permalink / raw)
To: Fan Zhang, Pablo de Lara, Akhil Goyal
Cc: dev, kai.ji, Ciara Power, slawomirx.mrozowicz
Currently, for a sessionless op, the session taken from the mempool
contains some values previously set by a testcase that does use a
session. This is due to the session object not being reset before going
back into the mempool.
This caused issues when multiple sessionless testcases ran, as the
previously set objects were being used for the first few testcases, but
subsequent testcases used empty objects, as they were being correctly
reset by the sessionless testcases.
To fix this, the session objects are now reset before being returned to
the mempool for session testcases. In addition, rather than pulling the
session object directly from the mempool for sessionless testcases, the
session_create() function is now used, which sets the required values,
such as nb_drivers.
Fixes: c75542ae4200 ("crypto/ipsec_mb: introduce IPsec_mb framework")
Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
Cc: roy.fan.zhang@intel.com
Cc: slawomirx.mrozowicz@intel.com
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
v3:
- Modified fix to reset sessions, and ensure values are then set for
sessionless testcases. V2 fix just ensured the same values in
session objects were reused, as they were not being reset,
which was incorrect.
---
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 12 ++++++++----
lib/cryptodev/rte_cryptodev.c | 1 +
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index d074b33133..8ec23c172d 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -415,7 +415,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
uint32_t driver_id = ipsec_mb_get_driver_id(qp->pmd_type);
struct rte_crypto_sym_op *sym_op = op->sym;
uint8_t sess_type = op->sess_type;
- void *_sess;
+ struct rte_cryptodev_sym_session *_sess;
void *_sess_private_data = NULL;
struct ipsec_mb_internals *pmd_data = &ipsec_mb_pmds[qp->pmd_type];
@@ -426,8 +426,12 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
driver_id);
break;
case RTE_CRYPTO_OP_SESSIONLESS:
- if (!qp->sess_mp ||
- rte_mempool_get(qp->sess_mp, (void **)&_sess))
+ if (!qp->sess_mp)
+ return NULL;
+
+ _sess = rte_cryptodev_sym_session_create(qp->sess_mp);
+
+ if (!_sess)
return NULL;
if (!qp->sess_mp_priv ||
@@ -443,7 +447,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
sess = NULL;
}
- sym_op->session = (struct rte_cryptodev_sym_session *)_sess;
+ sym_op->session = _sess;
set_sym_session_private_data(sym_op->session, driver_id,
_sess_private_data);
break;
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 42f3221052..af24969ed5 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -2032,6 +2032,7 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
+ memset(sess, 0, rte_cryptodev_sym_get_existing_header_session_size(sess));
rte_mempool_put(sess_mp, sess);
rte_cryptodev_trace_sym_session_free(sess);
--
2.25.1
* [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
2022-09-21 12:50 ` [PATCH v3 1/5] test/crypto: fix wireless auth digest segment Ciara Power
2022-09-21 12:50 ` [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
@ 2022-09-21 12:50 ` Ciara Power
2022-09-21 14:50 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
` (2 subsequent siblings)
5 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-09-21 12:50 UTC (permalink / raw)
To: Fan Zhang, Pablo de Lara; +Cc: dev, kai.ji, Ciara Power
The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
algorithms using the JOB API.
This support was added to AESNI_MB PMD previously, but the SGL feature
flags could not be added due to no SGL support for other algorithms.
This patch adds a workaround SGL approach for other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
The job is processed, and on return, the linear buffer is split into the
original destination segments.
Existing AESNI_MB testcases are passing with these feature flags added.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
v3:
- Reduced code duplication by adding a reusable function.
- Changed int to uint64_t for total_len.
v2:
- Small improvements when copying segments to linear buffer.
- Added documentation changes.
---
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 5 +
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 ++++++++++++++++----
4 files changed, 156 insertions(+), 34 deletions(-)
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 07222ee117..59c134556f 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -72,7 +72,6 @@ Protocol offloads:
Limitations
-----------
-* Chained mbufs are not supported.
* Out-of-place is not supported for combined Crypto-CRC DOCSIS security
protocol.
* RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 3c648a391e..e4e965c35a 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -12,6 +12,10 @@ CPU AVX = Y
CPU AVX2 = Y
CPU AVX512 = Y
CPU AESNI = Y
+In Place SGL = Y
+OOP SGL In SGL Out = Y
+OOP SGL In LB Out = Y
+OOP LB In SGL Out = Y
OOP LB In LB Out = Y
CPU crypto = Y
Symmetric sessionless = Y
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 7fab9d6550..b3717ce9e3 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -60,6 +60,11 @@ New Features
* Added AES-CCM support in lookaside protocol (IPsec) for CN9K & CN10K.
* Added AES & DES DOCSIS algorithm support in lookaside crypto for CN9K.
+* **Added SGL support to AESNI_MB PMD.**
+
+ Added support for SGL to AESNI_MB PMD. Support for in-place,
+ OOP SGL in SGL out, OOP LB in SGL out, and OOP SGL in LB out added.
+
Removed Items
-------------
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 6d5d3ce8eb..62f7d4ee5a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t oop, const uint32_t auth_offset,
const uint32_t cipher_offset, const uint32_t auth_length,
- const uint32_t cipher_length)
+ const uint32_t cipher_length, uint8_t lb_sgl)
{
struct rte_mbuf *m_src, *m_dst;
uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t cipher_end, auth_end;
/* Only cipher then hash needs special calculation. */
- if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+ if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
return auth_offset;
m_src = op->sym->m_src;
@@ -1159,6 +1159,81 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
return 0;
}
+static uint64_t
+sgl_linear_cipher_auth_len(IMB_JOB *job, uint64_t *auth_len)
+{
+ uint64_t cipher_len;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ *auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ *auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ *auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ return RTE_MAX(*auth_len, cipher_len);
+}
+
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+ struct aesni_mb_session *session)
+{
+ uint64_t auth_len, total_len;
+ uint8_t *src, *linear_buf = NULL;
+ int lb_offset = 0;
+ struct rte_mbuf *src_seg;
+ uint16_t src_len;
+
+ total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+ linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
+ if (linear_buf == NULL) {
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ return -1;
+ }
+
+ for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+ (total_len - lb_offset > 0);
+ src_seg = src_seg->next) {
+ src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+ src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+ rte_memcpy(linear_buf + lb_offset, src, src_len);
+ lb_offset += src_len;
+ }
+
+ job->src = linear_buf;
+ job->dst = linear_buf + dst_offset;
+ job->user_data2 = linear_buf;
+
+ if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ job->u.GCM.aad = linear_buf;
+
+ if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+ job->auth_tag_output = linear_buf + lb_offset;
+ else
+ job->auth_tag_output = linear_buf + auth_len;
+
+ return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+ if (alg == IMB_CIPHER_CHACHA20_POLY1305
+ || alg == IMB_CIPHER_GCM)
+ return 1;
+ return 0;
+}
/**
* Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1246,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
*
* @return
* - 0 on success, the IMB_JOB will be filled
- * - -1 if invalid session, IMB_JOB will not be filled
+ * - -1 if invalid session or errors allocating SGL linear buffer,
+ * IMB_JOB will not be filled
*/
static inline int
set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1267,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint32_t total_len;
IMB_JOB base_job;
uint8_t sgl = 0;
+ uint8_t lb_sgl = 0;
int ret;
session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1276,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return -1;
}
- if (op->sym->m_src->nb_segs > 1) {
- if (session->cipher.mode != IMB_CIPHER_GCM
- && session->cipher.mode !=
- IMB_CIPHER_CHACHA20_POLY1305) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
- " or CHACHA20_POLY1305 algorithms.");
- return -1;
- }
- sgl = 1;
- }
-
/* Set crypto operation */
job->chain_order = session->chain_order;
@@ -1233,6 +1298,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->dec_keys = session->cipher.expanded_aes_keys.decode;
}
+ if (!op->sym->m_dst) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else if (op->sym->m_dst == op->sym->m_src) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else {
+ /* out-of-place operation */
+ m_dst = op->sym->m_dst;
+ oop = 1;
+ }
+
+ if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+ sgl = 1;
+ if (!imb_lib_support_sgl_algo(session->cipher.mode))
+ lb_sgl = 1;
+ }
+
switch (job->hash_alg) {
case IMB_AUTH_AES_XCBC:
job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1416,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
m_offset = 0;
}
- if (!op->sym->m_dst) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else if (op->sym->m_dst == op->sym->m_src) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else {
- /* out-of-place operation */
- m_dst = op->sym->m_dst;
- oop = 1;
- }
-
/* Set digest output location */
if (job->hash_alg != IMB_AUTH_NULL &&
session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1506,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bits = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1523,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bytes = auth_len_in_bytes;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1535,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session, oop, op->sym->auth.data.offset,
op->sym->cipher.data.offset,
op->sym->auth.data.length,
- op->sym->cipher.data.length);
+ op->sym->cipher.data.length, lb_sgl);
job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1596,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->user_data = op;
if (sgl) {
+
+ if (lb_sgl)
+ return handle_sgl_linear(job, op, m_offset, session);
+
base_job = *job;
job->sgl_state = IMB_SGL_INIT;
job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1695,6 +1770,31 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
sess->auth.req_digest_len);
}
+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+ struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+ int lb_offset = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+ op->sym->m_src : op->sym->m_dst;
+ uint16_t total_len, dst_len;
+ uint64_t auth_len;
+ uint8_t *dst;
+
+ total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+
+ if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+ total_len += job->auth_tag_output_len_in_bytes;
+
+ for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) {
+ dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+ dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+ rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+ lb_offset += dst_len;
+ }
+}
+
/**
* Process a completed job and return rte_mbuf which job processed
*
@@ -1712,6 +1812,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint32_t driver_id = ipsec_mb_get_driver_id(
IPSEC_MB_PMD_TYPE_AESNI_MB);
+ uint8_t *linear_buf = NULL;
#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
@@ -1740,6 +1841,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
case IMB_STATUS_COMPLETED:
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ if ((op->sym->m_src->nb_segs > 1 ||
+ (op->sym->m_dst != NULL &&
+ op->sym->m_dst->nb_segs > 1)) &&
+ !imb_lib_support_sgl_algo(sess->cipher.mode)) {
+ linear_buf = (uint8_t *) job->user_data2;
+ post_process_sgl_linear(op, job, sess, linear_buf);
+ }
+
if (job->hash_alg == IMB_AUTH_NULL)
break;
@@ -1766,6 +1875,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
default:
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
}
+ rte_free(linear_buf);
}
/* Free session if a session-less crypto op */
@@ -2252,7 +2362,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+ RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+ RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
aesni_mb_data->internals_priv_size = 0;
aesni_mb_data->ops = &aesni_mb_pmd_ops;
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (2 preceding siblings ...)
2022-09-21 12:50 ` [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-09-21 12:50 ` Ciara Power
2022-09-21 14:54 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 5/5] test/crypto: add remaining blockcipher " Ciara Power
2022-09-26 8:06 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB De Lara Guarch, Pablo
5 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-09-21 12:50 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
More tests are added to cover variations of OOP SGL for snow3g,
namely LB_IN_SGL_OUT and SGL_IN_LB_OUT.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
1 file changed, 39 insertions(+), 9 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 5533c135b0..a48c0abae6 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4347,7 +4347,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
}
static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+ uint8_t sgl_in, uint8_t sgl_out)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4378,9 +4379,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
uint64_t feat_flags = dev_info.feature_flags;
- if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
- printf("Device doesn't support out-of-place scatter-gather "
- "in both input and output mbufs. "
+ if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+ || ((!sgl_in && sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+ || ((sgl_in && !sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+ printf("Device doesn't support out-of-place scatter gather type. "
"Test Skipped.\n");
return TEST_SKIPPED;
}
@@ -4405,10 +4409,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
/* the algorithms block size */
plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);
- ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 10, 0);
- ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 3, 0);
+ if (sgl_in)
+ ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 10, 0);
+ else {
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+ }
+
+ if (sgl_out)
+ ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 3, 0);
+ else {
+ ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+ }
TEST_ASSERT_NOT_NULL(ut_params->ibuf,
"Failed to allocate input buffer in mempool");
@@ -6762,9 +6777,20 @@ test_snow3g_encryption_test_case_1_oop(void)
static int
test_snow3g_encryption_test_case_1_oop_sgl(void)
{
- return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
}
+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}
static int
test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -15993,6 +16019,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_offset_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.25.1
* [PATCH v3 5/5] test/crypto: add remaining blockcipher SGL tests
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (3 preceding siblings ...)
2022-09-21 12:50 ` [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
@ 2022-09-21 12:50 ` Ciara Power
2022-09-21 14:55 ` Zhang, Roy Fan
2022-09-26 8:06 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB De Lara Guarch, Pablo
5 siblings, 1 reply; 38+ messages in thread
From: Ciara Power @ 2022-09-21 12:50 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang, Yipeng Wang, Sameh Gobriel,
Bruce Richardson, Vladimir Medvedkin
Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
The current blockcipher test function only supports two types of
SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into
the function, with the number of segments always set to 3.
To ensure all SGL types are tested, blockcipher test vectors now have
fields to specify the SGL type and the number of segments.
If these fields are missing, the previous defaults are used:
either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.
Some AES and Hash vectors are modified to use these new fields, and new
AES tests are added to test the SGL types that were not previously
being tested.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
app/test/test_cryptodev_blockcipher.c | 50 +--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
4 files changed, 335 insertions(+), 70 deletions(-)
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index a797af1b00..2c1875d3d9 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -4163,12 +4163,44 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_2,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
},
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
+
{
.test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest",
.test_data = &aes_test_data_3,
@@ -4193,11 +4225,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4207,10 +4280,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4255,12 +4370,46 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
- "Scatter Gather Sessionless",
+ "Scatter Gather Sessionless (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in LB out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
@@ -4270,11 +4419,42 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 2
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC XCBC Encryption Digest",
@@ -4358,6 +4538,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN_ENC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -4382,6 +4564,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4397,6 +4581,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DEC_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4421,6 +4607,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4504,6 +4692,41 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
{
.test_descr = "AES-128-CBC Decryption",
.test_data = &aes_test_data_4,
@@ -4515,11 +4738,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
{
- .test_descr = "AES-192-CBC Encryption Scatter gather",
+ .test_descr = "AES-192-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in SGL out)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-192-CBC Decryption",
@@ -4527,10 +4778,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
},
{
- .test_descr = "AES-192-CBC Decryption Scatter Gather",
+ .test_descr = "AES-192-CBC Decryption Scatter Gather (Inplace)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-CBC Encryption",
@@ -4689,67 +4969,42 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
},
{
.test_descr = "AES-256-XTS Encryption (512-byte plaintext"
- " Dataunit 512) Scater gather OOP",
+ " Dataunit 512) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (512-byte plaintext"
- " Dataunit 512) Scater gather OOP",
+ " Dataunit 512) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Encryption (512-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Decryption (512-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
- " Dataunit 4096) Scater gather OOP",
+ " Dataunit 4096) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
- " Dataunit 4096) Scater gather OOP",
+ " Dataunit 4096) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "cipher-only - NULL algo - x8 - encryption",
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index b5813b956f..f1ef0b606f 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -96,7 +96,9 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_dst_buf[MBUF_SIZE];
uint32_t pad_len;
- int nb_segs = 1;
+ int nb_segs_in = 1;
+ int nb_segs_out = 1;
+ uint64_t sgl_type = t->sgl_flag;
uint32_t nb_iterates = 0;
rte_cryptodev_info_get(dev_id, &dev_info);
@@ -121,30 +123,31 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
- uint64_t oop_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ if (sgl_type == 0) {
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP)
+ sgl_type = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ else
+ sgl_type = RTE_CRYPTODEV_FF_IN_PLACE_SGL;
+ }
- if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- if (!(feat_flags & oop_flag)) {
- printf("Device doesn't support out-of-place "
- "scatter-gather in input mbuf. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
- } else {
- if (!(feat_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL)) {
- printf("Device doesn't support in-place "
- "scatter-gather mbufs. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
+ if (!(feat_flags & sgl_type)) {
+ printf("Device doesn't support scatter-gather type."
+ " Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "SKIPPED");
+ return TEST_SKIPPED;
}
- nb_segs = 3;
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_IN_PLACE_SGL)
+ nb_segs_in = t->sgl_segs == 0 ? 3 : t->sgl_segs;
+
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)
+ nb_segs_out = t->sgl_segs == 0 ? 3 : t->sgl_segs;
}
+
if (!!(feat_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) ^
tdata->wrapped_key) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
@@ -207,7 +210,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
/* for contiguous mbuf, nb_segs is 1 */
ibuf = create_segmented_mbuf(mbuf_pool,
- tdata->ciphertext.len, nb_segs, src_pattern);
+ tdata->ciphertext.len, nb_segs_in, src_pattern);
if (ibuf == NULL) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
@@ -256,7 +259,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- obuf = rte_pktmbuf_alloc(mbuf_pool);
+ obuf = create_segmented_mbuf(mbuf_pool,
+ tdata->ciphertext.len, nb_segs_out, dst_pattern);
if (!obuf) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u "
"FAILED: %s", __LINE__,
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 84f5d57787..bad93a5ec1 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -57,6 +57,8 @@ struct blockcipher_test_case {
const struct blockcipher_test_data *test_data;
uint8_t op_mask; /* operation mask */
uint8_t feature_mask;
+ uint64_t sgl_flag;
+ uint8_t sgl_segs;
};
struct blockcipher_test_data {
diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h
index 5bd7858de4..62602310b2 100644
--- a/app/test/test_cryptodev_hash_test_vectors.h
+++ b/app/test/test_cryptodev_hash_test_vectors.h
@@ -467,10 +467,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
},
{
- .test_descr = "HMAC-SHA1 Digest Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "HMAC-SHA1 Digest Verify",
@@ -478,10 +480,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
},
{
- .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "SHA224 Digest",
--
2.25.1
* RE: [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup
2022-09-15 11:38 ` De Lara Guarch, Pablo
@ 2022-09-21 13:02 ` Power, Ciara
0 siblings, 0 replies; 38+ messages in thread
From: Power, Ciara @ 2022-09-21 13:02 UTC (permalink / raw)
To: De Lara Guarch, Pablo, Zhang, Roy Fan; +Cc: dev, Ji, Kai, Mrozowicz, SlawomirX
Hi Pablo,
> -----Original Message-----
> From: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Sent: Thursday 15 September 2022 13:39
> To: Power, Ciara <ciara.power@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Mrozowicz, SlawomirX
> <slawomirx.mrozowicz@intel.com>
> Subject: RE: [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup
>
> Hi Ciara,
>
> > -----Original Message-----
> > From: Power, Ciara <ciara.power@intel.com>
> > Sent: Thursday, August 25, 2022 3:29 PM
> > To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>
> > Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Power, Ciara
> > <ciara.power@intel.com>; Mrozowicz, SlawomirX
> > <slawomirx.mrozowicz@intel.com>
> > Subject: [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup
> >
> > Currently, for a sessionless op, the session created is reset before
> > being put back into the mempool. This causes issues as the object
> > isn't correctly released into the mempool.
> >
> > Fixes: c68d7aa354f6 ("crypto/aesni_mb: use architecture independent
> > macros")
> > Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
> > Fixes: f16662885472 ("crypto/ipsec_mb: add chacha_poly PMD")
> > Cc: roy.fan.zhang@intel.com
> > Cc: slawomirx.mrozowicz@intel.com
> > Cc: kai.ji@intel.com
> >
> > Signed-off-by: Ciara Power <ciara.power@intel.com>
> > ---
> > drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 4 ----
> > drivers/crypto/ipsec_mb/pmd_chacha_poly.c | 4 ----
> > drivers/crypto/ipsec_mb/pmd_kasumi.c | 5 -----
> > drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 ----
> > drivers/crypto/ipsec_mb/pmd_zuc.c | 4 ----
> > 5 files changed, 21 deletions(-)
> >
> > diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> > b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> > index 6d5d3ce8eb..944fce0261 100644
> > --- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> > +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> > @@ -1770,10 +1770,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp,
> > IMB_JOB *job)
> >
> > /* Free session if a session-less crypto op */
> > if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
> > - memset(sess, 0, sizeof(struct aesni_mb_session));
> > - memset(op->sym->session, 0,
> > -
>
> This will leave some info leftover, so it may cause a problem if this object is
> reused? Is this memset clearing mempool object header and that's the reason
> why it cannot be released properly?
> Maybe Fan/Kai/Slawomir will know more on this.
[CP]
Yes, I believe this would leave data leftover - my initial solution was incorrect.
I have sent a v3 fix that takes a different approach, after debugging the issue a little more.
I found the sessionless tests were reusing data in old session objects from previous
session test cases, which had not been reset before being put back into the mempool.
Once that reset was added, the sessionless tests failed because session->nb_drivers was 0 -
the value was never being set for sessionless operations. Instead of pulling from the
mempool directly, I added a call to sym_session_create(), which pulls from the mempool
and also sets values such as nb_drivers.
These changes can be seen here: https://patchwork.dpdk.org/project/dpdk/patch/20220921125036.9104-3-ciara.power@intel.com/
Thanks for the review.
* RE: [PATCH v3 1/5] test/crypto: fix wireless auth digest segment
2022-09-21 12:50 ` [PATCH v3 1/5] test/crypto: fix wireless auth digest segment Ciara Power
@ 2022-09-21 13:32 ` Zhang, Roy Fan
0 siblings, 0 replies; 38+ messages in thread
From: Zhang, Roy Fan @ 2022-09-21 13:32 UTC (permalink / raw)
To: Power, Ciara, Akhil Goyal; +Cc: dev, Ji, Kai, De Lara Guarch, Pablo
Hi Ciara,
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Wednesday, September 21, 2022 1:51 PM
> To: Akhil Goyal <gakhil@marvell.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Power, Ciara <ciara.power@intel.com>
> Subject: [PATCH v3 1/5] test/crypto: fix wireless auth digest segment
>
> The segment size for some tests was too small to hold the auth digest.
> This caused issues when using op->sym->auth.digest.data for comparisons
> in AESNI_MB PMD after a subsequent patch enables SGL.
>
> For example, if segment size is 2, and digest size is 4, then 4 bytes
> are read from op->sym->auth.digest.data, which overflows into the memory
> after the segment, rather than using the second segment that contains
> the remaining half of the digest.
>
> Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
> ---
> app/test/test_cryptodev.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index 6ee4480399..5533c135b0 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation(
> remaining_off -= rte_pktmbuf_data_len(sgl_buf);
> sgl_buf = sgl_buf->next;
> }
> +
> + /* The last segment should be large enough to hold full digest
> */
> + if (sgl_buf->data_len < auth_tag_len) {
> + rte_pktmbuf_free(sgl_buf->next);
> + sgl_buf->next = NULL;
> + rte_pktmbuf_append(sgl_buf, auth_tag_len -
> sgl_buf->data_len);
The append should work once the mbufs are set up correctly and the mempool is
allocated properly, but we should not assume that - it is better to add a simple
check here to make sure the append succeeded.
Other than that,
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless
2022-09-21 12:50 ` [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
@ 2022-09-21 13:33 ` Zhang, Roy Fan
0 siblings, 0 replies; 38+ messages in thread
From: Zhang, Roy Fan @ 2022-09-21 13:33 UTC (permalink / raw)
To: Power, Ciara, De Lara Guarch, Pablo, Akhil Goyal
Cc: dev, Ji, Kai, Mrozowicz, SlawomirX
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Wednesday, September 21, 2022 1:51 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Power, Ciara
> <ciara.power@intel.com>; Mrozowicz, SlawomirX
> <slawomirx.mrozowicz@intel.com>
> Subject: [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless
>
> Currently, for a sessionless op, the session taken from the mempool
> contains some values previously set by a testcase that does use a
> session. This is due to the session object not being reset before going
> back into the mempool.
>
> This caused issues when multiple sessionless testcases ran, as the
> previously set objects were being used for the first few testcases, but
> subsequent testcases used empty objects, as they were being correctly
> reset by the sessionless testcases.
>
> To fix this, the session objects are now reset before being returned to
> the mempool for session testcases. In addition, rather than pulling the
> session object directly from the mempool for sessionless testcases, the
> session_create() function is now used, which sets the required values,
> such as nb_drivers.
>
> Fixes: c75542ae4200 ("crypto/ipsec_mb: introduce IPsec_mb framework")
> Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
> Cc: roy.fan.zhang@intel.com
> Cc: slawomirx.mrozowicz@intel.com
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support
2022-09-21 12:50 ` [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-09-21 14:50 ` Zhang, Roy Fan
0 siblings, 0 replies; 38+ messages in thread
From: Zhang, Roy Fan @ 2022-09-21 14:50 UTC (permalink / raw)
To: Power, Ciara, De Lara Guarch, Pablo; +Cc: dev, Ji, Kai
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Wednesday, September 21, 2022 1:51 PM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Power, Ciara
> <ciara.power@intel.com>
> Subject: [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support
>
> The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
> algorithms using the JOB API.
> This support was added to AESNI_MB PMD previously, but the SGL feature
> flags could not be added due to no SGL support for other algorithms.
>
> This patch adds a workaround SGL approach for other algorithms
> using the JOB API. The segmented input buffers are copied into a
> linear buffer, which is passed as a single job to intel-ipsec-mb.
> The job is processed, and on return, the linear buffer is split into the
> original destination segments.
>
> Existing AESNI_MB testcases are passing with these feature flags added.
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests
2022-09-21 12:50 ` [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
@ 2022-09-21 14:54 ` Zhang, Roy Fan
0 siblings, 0 replies; 38+ messages in thread
From: Zhang, Roy Fan @ 2022-09-21 14:54 UTC (permalink / raw)
To: Power, Ciara, Akhil Goyal; +Cc: dev, Ji, Kai, De Lara Guarch, Pablo
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Wednesday, September 21, 2022 1:51 PM
> To: Akhil Goyal <gakhil@marvell.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Power, Ciara <ciara.power@intel.com>
> Subject: [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests
>
> More tests are added to test variations of OOP SGL for snow3g.
> This includes LB_IN_SGL_OUT and SGL_IN_LB_OUT.
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v3 5/5] test/crypto: add remaining blockcipher SGL tests
2022-09-21 12:50 ` [PATCH v3 5/5] test/crypto: add remaining blockcipher " Ciara Power
@ 2022-09-21 14:55 ` Zhang, Roy Fan
0 siblings, 0 replies; 38+ messages in thread
From: Zhang, Roy Fan @ 2022-09-21 14:55 UTC (permalink / raw)
To: Power, Ciara, Akhil Goyal, Wang, Yipeng1, Gobriel, Sameh,
Richardson, Bruce, Medvedkin, Vladimir
Cc: dev, Ji, Kai, De Lara Guarch, Pablo
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Wednesday, September 21, 2022 1:51 PM
> To: Akhil Goyal <gakhil@marvell.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Wang, Yipeng1 <yipeng1.wang@intel.com>;
> Gobriel, Sameh <sameh.gobriel@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Power, Ciara <ciara.power@intel.com>
> Subject: [PATCH v3 5/5] test/crypto: add remaining blockcipher SGL tests
>
> The current blockcipher test function only has support for two types of
> SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into
> the function, with the number of segments always set to 3.
>
> To ensure all SGL types are tested, blockcipher test vectors now have
> fields to specify SGL type, and the number of segments.
> If these fields are missing, the previous defaults are used,
> either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.
>
> Some AES and Hash vectors are modified to use these new fields, and new
> AES tests are added to test the SGL types that were not previously
> being tested.
>
> Signed-off-by: Ciara Power <ciara.power@intel.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v3 0/5] add remaining SGL support to AESNI_MB
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
` (4 preceding siblings ...)
2022-09-21 12:50 ` [PATCH v3 5/5] test/crypto: add remaining blockcipher " Ciara Power
@ 2022-09-26 8:06 ` De Lara Guarch, Pablo
5 siblings, 0 replies; 38+ messages in thread
From: De Lara Guarch, Pablo @ 2022-09-26 8:06 UTC (permalink / raw)
To: Power, Ciara; +Cc: dev, Ji, Kai, Zhang, Roy Fan
Hi Ciara,
> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Wednesday, September 21, 2022 2:51 PM
> Cc: dev@dpdk.org; Ji, Kai <kai.ji@intel.com>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Power, Ciara <ciara.power@intel.com>
> Subject: [PATCH v3 0/5] add remaining SGL support to AESNI_MB
>
> Currently, the intel-ipsec-mb library only supports SGL for GCM and ChaCha20-
> Poly1305 algorithms through the JOB API.
>
> To add SGL support for other algorithms, a workaround approach is added in
> the AESNI_MB PMD. SGL feature flags can now be added to the PMD.
>
> This patchset also includes a fix for SGL wireless operations, session cleanup and
> session creation for sessionless operations.
>
> Some additional Snow3G SGL and AES tests are also added for various SGL
> input/output combinations that were not previously being tested.
>
> v3:
> - Modified fix to reset sessions, and ensure values are then set for
> sessionless testcases. V2 fix just ensured the same values in
> session objects were reused, as they were not being reset,
> which was incorrect.
> - Reduced code duplication by adding a reusable function.
> - Changed int to uint64_t for total_len.
>
> v2:
> - Added documentation changes.
> - Added fix for sessionless cleanup.
> - Modified blockcipher tests to support various SGL types.
> - Added more SGL AES tests.
> - Small fixes.
>
>
> Ciara Power (5):
> test/crypto: fix wireless auth digest segment
> crypto/ipsec_mb: fix session creation for sessionless
> crypto/ipsec_mb: add remaining SGL support
> test/crypto: add OOP snow3g SGL tests
> test/crypto: add remaining blockcipher SGL tests
>
> app/test/test_cryptodev.c | 56 +++-
> app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
> app/test/test_cryptodev_blockcipher.c | 50 +--
> app/test/test_cryptodev_blockcipher.h | 2 +
> app/test/test_cryptodev_hash_test_vectors.h | 8 +-
> doc/guides/cryptodevs/aesni_mb.rst | 1 -
> doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
> doc/guides/rel_notes/release_22_11.rst | 5 +
> drivers/crypto/ipsec_mb/ipsec_mb_private.h | 12 +-
> drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 ++++++++--
> lib/cryptodev/rte_cryptodev.c | 1 +
> 11 files changed, 547 insertions(+), 117 deletions(-)
>
> --
> 2.25.1
Thanks for addressing the comments!
Series-acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
* [PATCH v4 0/5] add remaining SGL support to AESNI_MB
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
` (4 preceding siblings ...)
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
@ 2022-10-04 12:55 ` Ciara Power
2022-10-04 12:55 ` [PATCH v4 1/5] test/crypto: fix wireless auth digest segment Ciara Power
` (5 more replies)
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
6 siblings, 6 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-04 12:55 UTC (permalink / raw)
Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power
Currently, the intel-ipsec-mb library only supports SGL for
GCM and ChaCha20-Poly1305 algorithms through the JOB API.
To add SGL support for other algorithms, a workaround approach is
added in the AESNI_MB PMD. SGL feature flags can now be added to
the PMD.
This patchset also includes a fix for SGL wireless operations,
session cleanup and session creation for sessionless operations.
Some additional Snow3G SGL and AES tests are also added for
various SGL input/output combinations that were not
previously being tested.
v4: Added error check when appending space for digest to buffer.
v3:
- Modified fix to reset sessions, and ensure values are then set for
sessionless testcases. V2 fix just ensured the same values in
session objects were reused, as they were not being reset,
which was incorrect.
- Reduced code duplication by adding a reusable function.
- Changed int to uint64_t for total_len.
v2:
- Added documentation changes.
- Added fix for sessionless cleanup.
- Modified blockcipher tests to support various SGL types.
- Added more SGL AES tests.
- Small fixes.
Ciara Power (5):
test/crypto: fix wireless auth digest segment
crypto/ipsec_mb: fix session creation for sessionless
crypto/ipsec_mb: add remaining SGL support
test/crypto: add OOP snow3g SGL tests
test/crypto: add remaining blockcipher SGL tests
app/test/test_cryptodev.c | 58 +++-
app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
app/test/test_cryptodev_blockcipher.c | 50 +--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 5 +
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 12 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 ++++++++--
lib/cryptodev/rte_cryptodev.c | 1 +
11 files changed, 549 insertions(+), 117 deletions(-)
--
2.25.1
* [PATCH v4 1/5] test/crypto: fix wireless auth digest segment
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
@ 2022-10-04 12:55 ` Ciara Power
2022-10-04 12:55 ` [PATCH v4 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
` (4 subsequent siblings)
5 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-04 12:55 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang
Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power, Fan Zhang
The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in AESNI_MB PMD after a subsequent patch enables SGL.
For example, if segment size is 2, and digest size is 4, then 4 bytes
are read from op->sym->auth.digest.data, which overflows into the memory
after the segment, rather than using the second segment that contains
the remaining half of the digest.
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v4: Added failure check when appending digest size to buffer.
---
app/test/test_cryptodev.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 0c39b16b71..799eff0649 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3051,6 +3051,16 @@ create_wireless_algo_auth_cipher_operation(
remaining_off -= rte_pktmbuf_data_len(sgl_buf);
sgl_buf = sgl_buf->next;
}
+
+ /* The last segment should be large enough to hold full digest */
+ if (sgl_buf->data_len < auth_tag_len) {
+ rte_pktmbuf_free(sgl_buf->next);
+ sgl_buf->next = NULL;
+ TEST_ASSERT_NOT_NULL(rte_pktmbuf_append(sgl_buf,
+ auth_tag_len - sgl_buf->data_len),
+ "No room to append auth tag");
+ }
+
sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
uint8_t *, remaining_off);
sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,
--
2.25.1
* [PATCH v4 2/5] crypto/ipsec_mb: fix session creation for sessionless
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
2022-10-04 12:55 ` [PATCH v4 1/5] test/crypto: fix wireless auth digest segment Ciara Power
@ 2022-10-04 12:55 ` Ciara Power
2022-10-04 12:55 ` [PATCH v4 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
` (3 subsequent siblings)
5 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-04 12:55 UTC (permalink / raw)
To: Kai Ji, Pablo de Lara, Akhil Goyal, Fan Zhang
Cc: dev, Ciara Power, roy.fan.zhang, slawomirx.mrozowicz
Currently, for a sessionless op, the session taken from the mempool
contains some values previously set by a testcase that does use a
session. This is due to the session object not being reset before going
back into the mempool.
This caused issues when multiple sessionless testcases ran, as the
previously set objects were being used for the first few testcases, but
subsequent testcases used empty objects, as they were being correctly
reset by the sessionless testcases.
To fix this, the session objects are now reset before being returned to
the mempool for session testcases. In addition, rather than pulling the
session object directly from the mempool for sessionless testcases, the
session_create() function is now used, which sets the required values,
such as nb_drivers.
Fixes: c75542ae4200 ("crypto/ipsec_mb: introduce IPsec_mb framework")
Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
Cc: roy.fan.zhang@intel.com
Cc: slawomirx.mrozowicz@intel.com
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v3:
- Modified fix to reset sessions, and ensure values are then set for
sessionless testcases. V2 fix just ensured the same values in
session objects were reused, as they were not being reset,
which was incorrect.
---
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 12 ++++++++----
lib/cryptodev/rte_cryptodev.c | 1 +
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index 472b672f08..420701a818 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -415,7 +415,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
uint32_t driver_id = ipsec_mb_get_driver_id(qp->pmd_type);
struct rte_crypto_sym_op *sym_op = op->sym;
uint8_t sess_type = op->sess_type;
- void *_sess;
+ struct rte_cryptodev_sym_session *_sess;
void *_sess_private_data = NULL;
struct ipsec_mb_internals *pmd_data = &ipsec_mb_pmds[qp->pmd_type];
@@ -426,8 +426,12 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
driver_id);
break;
case RTE_CRYPTO_OP_SESSIONLESS:
- if (!qp->sess_mp ||
- rte_mempool_get(qp->sess_mp, (void **)&_sess))
+ if (!qp->sess_mp)
+ return NULL;
+
+ _sess = rte_cryptodev_sym_session_create(qp->sess_mp);
+
+ if (!_sess)
return NULL;
if (!qp->sess_mp_priv ||
@@ -443,7 +447,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
sess = NULL;
}
- sym_op->session = (struct rte_cryptodev_sym_session *)_sess;
+ sym_op->session = _sess;
set_sym_session_private_data(sym_op->session, driver_id,
_sess_private_data);
break;
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 9e76a1c72d..ac0c508e76 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -2187,6 +2187,7 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
+ memset(sess, 0, rte_cryptodev_sym_get_existing_header_session_size(sess));
rte_mempool_put(sess_mp, sess);
rte_cryptodev_trace_sym_session_free(sess);
--
2.25.1
* [PATCH v4 3/5] crypto/ipsec_mb: add remaining SGL support
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
2022-10-04 12:55 ` [PATCH v4 1/5] test/crypto: fix wireless auth digest segment Ciara Power
2022-10-04 12:55 ` [PATCH v4 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
@ 2022-10-04 12:55 ` Ciara Power
2022-10-04 12:55 ` [PATCH v4 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
` (2 subsequent siblings)
5 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-04 12:55 UTC (permalink / raw)
To: Kai Ji, Pablo de Lara; +Cc: dev, Ciara Power, Fan Zhang
The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
algorithms using the JOB API.
This support was added to AESNI_MB PMD previously, but the SGL feature
flags could not be added due to no SGL support for other algorithms.
This patch adds a workaround SGL approach for other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
The job is processed, and on return, the linear buffer is split into the
original destination segments.
Existing AESNI_MB testcases are passing with these feature flags added.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v3:
- Reduced code duplication by adding a reusable function.
- Changed int to uint64_t for total_len.
v2:
- Small improvements when copying segments to linear buffer.
- Added documentation changes.
---
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 5 +
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 ++++++++++++++++----
4 files changed, 156 insertions(+), 34 deletions(-)
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 07222ee117..59c134556f 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -72,7 +72,6 @@ Protocol offloads:
Limitations
-----------
-* Chained mbufs are not supported.
* Out-of-place is not supported for combined Crypto-CRC DOCSIS security
protocol.
* RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 3c648a391e..e4e965c35a 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -12,6 +12,10 @@ CPU AVX = Y
CPU AVX2 = Y
CPU AVX512 = Y
CPU AESNI = Y
+In Place SGL = Y
+OOP SGL In SGL Out = Y
+OOP SGL In LB Out = Y
+OOP LB In SGL Out = Y
OOP LB In LB Out = Y
CPU crypto = Y
Symmetric sessionless = Y
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 53fe21453c..81f7f978a4 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -102,6 +102,11 @@ New Features
* Added AES-CCM support in lookaside protocol (IPsec) for CN9K & CN10K.
* Added AES & DES DOCSIS algorithm support in lookaside crypto for CN9K.
+* **Added SGL support to AESNI_MB PMD.**
+
+ Added support for SGL to AESNI_MB PMD. Support for inplace,
+ OOP SGL in SGL out, OOP LB in SGL out, and OOP SGL in LB out added.
+
* **Added eventdev adapter instance get API.**
* Added ``rte_event_eth_rx_adapter_instance_get`` to get Rx adapter
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 6d5d3ce8eb..62f7d4ee5a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t oop, const uint32_t auth_offset,
const uint32_t cipher_offset, const uint32_t auth_length,
- const uint32_t cipher_length)
+ const uint32_t cipher_length, uint8_t lb_sgl)
{
struct rte_mbuf *m_src, *m_dst;
uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t cipher_end, auth_end;
/* Only cipher then hash needs special calculation. */
- if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+ if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
return auth_offset;
m_src = op->sym->m_src;
@@ -1159,6 +1159,81 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
return 0;
}
+static uint64_t
+sgl_linear_cipher_auth_len(IMB_JOB *job, uint64_t *auth_len)
+{
+ uint64_t cipher_len;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ *auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ *auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ *auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ return RTE_MAX(*auth_len, cipher_len);
+}
+
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+ struct aesni_mb_session *session)
+{
+ uint64_t auth_len, total_len;
+ uint8_t *src, *linear_buf = NULL;
+ int lb_offset = 0;
+ struct rte_mbuf *src_seg;
+ uint16_t src_len;
+
+ total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+ linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
+ if (linear_buf == NULL) {
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ return -1;
+ }
+
+ for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+ (total_len - lb_offset > 0);
+ src_seg = src_seg->next) {
+ src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+ src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+ rte_memcpy(linear_buf + lb_offset, src, src_len);
+ lb_offset += src_len;
+ }
+
+ job->src = linear_buf;
+ job->dst = linear_buf + dst_offset;
+ job->user_data2 = linear_buf;
+
+ if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ job->u.GCM.aad = linear_buf;
+
+ if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+ job->auth_tag_output = linear_buf + lb_offset;
+ else
+ job->auth_tag_output = linear_buf + auth_len;
+
+ return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+ if (alg == IMB_CIPHER_CHACHA20_POLY1305
+ || alg == IMB_CIPHER_GCM)
+ return 1;
+ return 0;
+}
/**
* Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1246,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
*
* @return
* - 0 on success, the IMB_JOB will be filled
- * - -1 if invalid session, IMB_JOB will not be filled
+ * - -1 if invalid session or errors allocating SGL linear buffer,
+ * IMB_JOB will not be filled
*/
static inline int
set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1267,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint32_t total_len;
IMB_JOB base_job;
uint8_t sgl = 0;
+ uint8_t lb_sgl = 0;
int ret;
session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1276,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return -1;
}
- if (op->sym->m_src->nb_segs > 1) {
- if (session->cipher.mode != IMB_CIPHER_GCM
- && session->cipher.mode !=
- IMB_CIPHER_CHACHA20_POLY1305) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
- " or CHACHA20_POLY1305 algorithms.");
- return -1;
- }
- sgl = 1;
- }
-
/* Set crypto operation */
job->chain_order = session->chain_order;
@@ -1233,6 +1298,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->dec_keys = session->cipher.expanded_aes_keys.decode;
}
+ if (!op->sym->m_dst) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else if (op->sym->m_dst == op->sym->m_src) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else {
+ /* out-of-place operation */
+ m_dst = op->sym->m_dst;
+ oop = 1;
+ }
+
+ if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+ sgl = 1;
+ if (!imb_lib_support_sgl_algo(session->cipher.mode))
+ lb_sgl = 1;
+ }
+
switch (job->hash_alg) {
case IMB_AUTH_AES_XCBC:
job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1416,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
m_offset = 0;
}
- if (!op->sym->m_dst) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else if (op->sym->m_dst == op->sym->m_src) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else {
- /* out-of-place operation */
- m_dst = op->sym->m_dst;
- oop = 1;
- }
-
/* Set digest output location */
if (job->hash_alg != IMB_AUTH_NULL &&
session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1506,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bits = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1523,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bytes = auth_len_in_bytes;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1535,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session, oop, op->sym->auth.data.offset,
op->sym->cipher.data.offset,
op->sym->auth.data.length,
- op->sym->cipher.data.length);
+ op->sym->cipher.data.length, lb_sgl);
job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1596,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->user_data = op;
if (sgl) {
+
+ if (lb_sgl)
+ return handle_sgl_linear(job, op, m_offset, session);
+
base_job = *job;
job->sgl_state = IMB_SGL_INIT;
job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1695,6 +1770,31 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
sess->auth.req_digest_len);
}
+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+ struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+ int lb_offset = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+ op->sym->m_src : op->sym->m_dst;
+ uint16_t total_len, dst_len;
+ uint64_t auth_len;
+ uint8_t *dst;
+
+ total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+
+ if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+ total_len += job->auth_tag_output_len_in_bytes;
+
+ for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) {
+ dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+ dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+ rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+ lb_offset += dst_len;
+ }
+}
+
/**
* Process a completed job and return rte_mbuf which job processed
*
@@ -1712,6 +1812,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
struct aesni_mb_session *sess = NULL;
uint32_t driver_id = ipsec_mb_get_driver_id(
IPSEC_MB_PMD_TYPE_AESNI_MB);
+ uint8_t *linear_buf = NULL;
#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
@@ -1740,6 +1841,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
case IMB_STATUS_COMPLETED:
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ if ((op->sym->m_src->nb_segs > 1 ||
+ (op->sym->m_dst != NULL &&
+ op->sym->m_dst->nb_segs > 1)) &&
+ !imb_lib_support_sgl_algo(sess->cipher.mode)) {
+ linear_buf = (uint8_t *) job->user_data2;
+ post_process_sgl_linear(op, job, sess, linear_buf);
+ }
+
if (job->hash_alg == IMB_AUTH_NULL)
break;
@@ -1766,6 +1875,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
default:
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
}
+ rte_free(linear_buf);
}
/* Free session if a session-less crypto op */
@@ -2252,7 +2362,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+ RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+ RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
aesni_mb_data->internals_priv_size = 0;
aesni_mb_data->ops = &aesni_mb_pmd_ops;
--
2.25.1
* [PATCH v4 4/5] test/crypto: add OOP snow3g SGL tests
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
` (2 preceding siblings ...)
2022-10-04 12:55 ` [PATCH v4 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-10-04 12:55 ` Ciara Power
2022-10-04 12:55 ` [PATCH v4 5/5] test/crypto: add remaining blockcipher " Ciara Power
2022-10-07 6:53 ` [EXT] [PATCH v4 0/5] add remaining SGL support to AESNI_MB Akhil Goyal
5 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-04 12:55 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang
Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power, Fan Zhang
More tests are added to cover the out-of-place SGL variations for snow3g,
specifically LB_IN_SGL_OUT and SGL_IN_LB_OUT.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
1 file changed, 39 insertions(+), 9 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 799eff0649..e732daae03 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4360,7 +4360,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
}
static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+ uint8_t sgl_in, uint8_t sgl_out)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4391,9 +4392,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
uint64_t feat_flags = dev_info.feature_flags;
- if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
- printf("Device doesn't support out-of-place scatter-gather "
- "in both input and output mbufs. "
+ if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+ || ((!sgl_in && sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+ || ((sgl_in && !sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+ printf("Device doesn't support out-of-place scatter gather type. "
"Test Skipped.\n");
return TEST_SKIPPED;
}
@@ -4418,10 +4422,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
/* the algorithms block size */
plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);
- ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 10, 0);
- ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 3, 0);
+ if (sgl_in)
+ ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 10, 0);
+ else {
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+ }
+
+ if (sgl_out)
+ ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 3, 0);
+ else {
+ ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+ }
TEST_ASSERT_NOT_NULL(ut_params->ibuf,
"Failed to allocate input buffer in mempool");
@@ -6775,9 +6790,20 @@ test_snow3g_encryption_test_case_1_oop(void)
static int
test_snow3g_encryption_test_case_1_oop_sgl(void)
{
- return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
}
+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}
static int
test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -16006,6 +16032,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_offset_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.25.1
* [PATCH v4 5/5] test/crypto: add remaining blockcipher SGL tests
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
` (3 preceding siblings ...)
2022-10-04 12:55 ` [PATCH v4 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
@ 2022-10-04 12:55 ` Ciara Power
2022-10-07 6:53 ` [EXT] [PATCH v4 0/5] add remaining SGL support to AESNI_MB Akhil Goyal
5 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-04 12:55 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang, Yipeng Wang, Sameh Gobriel,
Bruce Richardson, Vladimir Medvedkin
Cc: dev, kai.ji, pablo.de.lara.guarch, Ciara Power, Fan Zhang
The current blockcipher test function only supports two types of
SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into
the function, with the number of segments always set to 3.
To ensure all SGL types are tested, blockcipher test vectors now have
fields to specify SGL type, and the number of segments.
If these fields are missing, the previous defaults are used,
either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.
Some AES and Hash vectors are modified to use these new fields, and new
AES tests are added to test the SGL types that were not previously
being tested.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++---
app/test/test_cryptodev_blockcipher.c | 50 +--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
4 files changed, 335 insertions(+), 70 deletions(-)
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index a797af1b00..2c1875d3d9 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -4163,12 +4163,44 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_2,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
},
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
+
{
.test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest",
.test_data = &aes_test_data_3,
@@ -4193,11 +4225,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4207,10 +4280,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4255,12 +4370,46 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
- "Scatter Gather Sessionless",
+ "Scatter Gather Sessionless (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in LB out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
@@ -4270,11 +4419,42 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 2
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC XCBC Encryption Digest",
@@ -4358,6 +4538,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN_ENC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -4382,6 +4564,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4397,6 +4581,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DEC_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4421,6 +4607,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4504,6 +4692,41 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
{
.test_descr = "AES-128-CBC Decryption",
.test_data = &aes_test_data_4,
@@ -4515,11 +4738,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
{
- .test_descr = "AES-192-CBC Encryption Scatter gather",
+ .test_descr = "AES-192-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in SGL out)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-192-CBC Decryption",
@@ -4527,10 +4778,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
},
{
- .test_descr = "AES-192-CBC Decryption Scatter Gather",
+ .test_descr = "AES-192-CBC Decryption Scatter Gather (Inplace)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-CBC Encryption",
@@ -4689,67 +4969,42 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
},
{
.test_descr = "AES-256-XTS Encryption (512-byte plaintext"
- " Dataunit 512) Scater gather OOP",
+ " Dataunit 512) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (512-byte plaintext"
- " Dataunit 512) Scater gather OOP",
+ " Dataunit 512) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Encryption (512-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Decryption (512-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
- " Dataunit 4096) Scater gather OOP",
+ " Dataunit 4096) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
- " Dataunit 4096) Scater gather OOP",
+ " Dataunit 4096) Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
- },
- {
- .test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
- " Dataunit 0) Scater gather OOP",
- .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0,
- .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
- .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
- BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "cipher-only - NULL algo - x8 - encryption",
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index b5813b956f..f1ef0b606f 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -96,7 +96,9 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_dst_buf[MBUF_SIZE];
uint32_t pad_len;
- int nb_segs = 1;
+ int nb_segs_in = 1;
+ int nb_segs_out = 1;
+ uint64_t sgl_type = t->sgl_flag;
uint32_t nb_iterates = 0;
rte_cryptodev_info_get(dev_id, &dev_info);
@@ -121,30 +123,31 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
- uint64_t oop_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ if (sgl_type == 0) {
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP)
+ sgl_type = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ else
+ sgl_type = RTE_CRYPTODEV_FF_IN_PLACE_SGL;
+ }
- if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- if (!(feat_flags & oop_flag)) {
- printf("Device doesn't support out-of-place "
- "scatter-gather in input mbuf. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
- } else {
- if (!(feat_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL)) {
- printf("Device doesn't support in-place "
- "scatter-gather mbufs. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
+ if (!(feat_flags & sgl_type)) {
+ printf("Device doesn't support scatter-gather type."
+ " Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "SKIPPED");
+ return TEST_SKIPPED;
}
- nb_segs = 3;
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_IN_PLACE_SGL)
+ nb_segs_in = t->sgl_segs == 0 ? 3 : t->sgl_segs;
+
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)
+ nb_segs_out = t->sgl_segs == 0 ? 3 : t->sgl_segs;
}
+
if (!!(feat_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) ^
tdata->wrapped_key) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
@@ -207,7 +210,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
/* for contiguous mbuf, nb_segs is 1 */
ibuf = create_segmented_mbuf(mbuf_pool,
- tdata->ciphertext.len, nb_segs, src_pattern);
+ tdata->ciphertext.len, nb_segs_in, src_pattern);
if (ibuf == NULL) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
@@ -256,7 +259,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- obuf = rte_pktmbuf_alloc(mbuf_pool);
+ obuf = create_segmented_mbuf(mbuf_pool,
+ tdata->ciphertext.len, nb_segs_out, dst_pattern);
if (!obuf) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u "
"FAILED: %s", __LINE__,
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 84f5d57787..bad93a5ec1 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -57,6 +57,8 @@ struct blockcipher_test_case {
const struct blockcipher_test_data *test_data;
uint8_t op_mask; /* operation mask */
uint8_t feature_mask;
+ uint64_t sgl_flag;
+ uint8_t sgl_segs;
};
struct blockcipher_test_data {
diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h
index 5bd7858de4..62602310b2 100644
--- a/app/test/test_cryptodev_hash_test_vectors.h
+++ b/app/test/test_cryptodev_hash_test_vectors.h
@@ -467,10 +467,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
},
{
- .test_descr = "HMAC-SHA1 Digest Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "HMAC-SHA1 Digest Verify",
@@ -478,10 +480,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
},
{
- .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "SHA224 Digest",
--
2.25.1
* RE: [EXT] [PATCH v4 0/5] add remaining SGL support to AESNI_MB
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
` (4 preceding siblings ...)
2022-10-04 12:55 ` [PATCH v4 5/5] test/crypto: add remaining blockcipher " Ciara Power
@ 2022-10-07 6:53 ` Akhil Goyal
5 siblings, 0 replies; 38+ messages in thread
From: Akhil Goyal @ 2022-10-07 6:53 UTC (permalink / raw)
To: Ciara Power; +Cc: dev, kai.ji, pablo.de.lara.guarch
> ----------------------------------------------------------------------
> Currently, the intel-ipsec-mb library only supports SGL for
> GCM and ChaCha20-Poly1305 algorithms through the JOB API.
>
> To add SGL support for other algorithms, a workaround approach is
> added in the AESNI_MB PMD. SGL feature flags can now be added to
> the PMD.
>
> This patchset also includes a fix for SGL wireless operations,
> session cleanup and session creation for sessionless operations.
>
> Some additional Snow3G SGL and AES tests are also added for
> various SGL input/output combinations that were not
> previously being tested.
>
> v4: Added error check when appending space for digest to buffer.
>
Please rebase.
* [PATCH v5 0/4] add remaining SGL support to AESNI_MB
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
` (5 preceding siblings ...)
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
@ 2022-10-07 13:46 ` Ciara Power
2022-10-07 13:46 ` [PATCH v5 1/4] test/crypto: fix wireless auth digest segment Ciara Power
` (4 more replies)
6 siblings, 5 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-07 13:46 UTC (permalink / raw)
Cc: dev, gakhil, kai.ji, Ciara Power
Currently, the intel-ipsec-mb library only supports SGL for
GCM and ChaCha20-Poly1305 algorithms through the JOB API.
To add SGL support for other algorithms, a workaround approach is
added in the AESNI_MB PMD. SGL feature flags can now be added to
the PMD.
Some additional Snow3G SGL and AES tests are also added for
various SGL input/output combinations that were not
previously being tested.
v5:
- Rebased on for-main branch after session rework merged.
- Dropped bug fix patch for sessionless tests, no longer
applicable after session rework.
- Removed some added AES-XTS tests as they are skipped for AESNI_MB
and QAT and I can't verify them locally.
- Added AES-XTS tests that had been accidentally removed in
previous patchset versions.
v4: Added error check when appending space for digest to buffer.
v3:
- Modified fix to reset sessions, and ensure values are then set for
sessionless testcases. V2 fix just ensured the same values in
session objects were reused, as they were not being reset,
which was incorrect.
- Reduced code duplication by adding a reusable function.
- Changed int to uint64_t for total_len.
v2:
- Added documentation changes.
- Added fix for sessionless cleanup.
- Modified blockcipher tests to support various SGL types.
- Added more SGL AES tests.
- Small fixes.
Ciara Power (4):
test/crypto: fix wireless auth digest segment
crypto/ipsec_mb: add remaining SGL support
test/crypto: add OOP snow3g SGL tests
test/crypto: add remaining blockcipher SGL tests
app/test/test_cryptodev.c | 58 +++-
app/test/test_cryptodev_aes_test_vectors.h | 310 +++++++++++++++++++-
app/test/test_cryptodev_blockcipher.c | 50 ++--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 5 +
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 +++++++++---
9 files changed, 543 insertions(+), 75 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v5 1/4] test/crypto: fix wireless auth digest segment
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
@ 2022-10-07 13:46 ` Ciara Power
2022-10-07 13:46 ` [PATCH v5 2/4] crypto/ipsec_mb: add remaining SGL support Ciara Power
` (3 subsequent siblings)
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-07 13:46 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, Ciara Power, Fan Zhang, Pablo de Lara
The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in AESNI_MB PMD after a subsequent patch enables SGL.
For example, if segment size is 2, and digest size is 4, then 4 bytes
are read from op->sym->auth.digest.data, which overflows into the memory
after the segment, rather than using the second segment that contains
the remaining half of the digest.
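The core idea of the fix is to guarantee the digest is contiguous in the last segment before taking a flat pointer to it. A rough standalone sketch of that check-and-append step (the `seg` struct and `digest_ptr` helper are hypothetical stand-ins for the mbuf API; `room` plays the role of tailroom):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for the last mbuf segment. */
struct seg {
	uint8_t *data;
	size_t len;   /* bytes currently in use */
	size_t room;  /* total capacity of the segment */
};

/*
 * Return a flat pointer to the digest at offset `off`, growing the
 * segment first if the digest would otherwise run past its end.
 * Returns NULL when there is no tailroom to append into.
 */
static uint8_t *
digest_ptr(struct seg *last, size_t off, size_t digest_len)
{
	if (last->len - off < digest_len) {
		size_t need = digest_len - (last->len - off);

		if (last->len + need > last->room)
			return NULL; /* no room to append the tag */
		last->len += need;   /* append space for the full digest */
	}
	return last->data + off;
}
```

The actual fix additionally frees any trailing segments before appending, so the digest can never span a segment boundary.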
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v4: Added failure check when appending digest size to buffer.
---
app/test/test_cryptodev.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index c6d47a035e..203b8b61fa 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -2990,6 +2990,16 @@ create_wireless_algo_auth_cipher_operation(
remaining_off -= rte_pktmbuf_data_len(sgl_buf);
sgl_buf = sgl_buf->next;
}
+
+ /* The last segment should be large enough to hold full digest */
+ if (sgl_buf->data_len < auth_tag_len) {
+ rte_pktmbuf_free(sgl_buf->next);
+ sgl_buf->next = NULL;
+ TEST_ASSERT_NOT_NULL(rte_pktmbuf_append(sgl_buf,
+ auth_tag_len - sgl_buf->data_len),
+ "No room to append auth tag");
+ }
+
sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
uint8_t *, remaining_off);
sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v5 2/4] crypto/ipsec_mb: add remaining SGL support
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
2022-10-07 13:46 ` [PATCH v5 1/4] test/crypto: fix wireless auth digest segment Ciara Power
@ 2022-10-07 13:46 ` Ciara Power
2022-10-07 13:46 ` [PATCH v5 3/4] test/crypto: add OOP snow3g SGL tests Ciara Power
` (2 subsequent siblings)
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-07 13:46 UTC (permalink / raw)
To: Kai Ji, Pablo de Lara; +Cc: dev, gakhil, Ciara Power, Fan Zhang
The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
algorithms using the JOB API.
This support was previously added to the AESNI_MB PMD, but the SGL
feature flags could not be set because other algorithms lacked SGL support.
This patch adds a workaround SGL approach for other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
The job is processed, and on return, the linear buffer is split into the
original destination segments.
Existing AESNI_MB testcases are passing with these feature flags added.
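The workaround described above follows a gather/process/scatter pattern. A minimal standalone sketch of the copy-in and copy-out steps (the simplified `seg` struct and the `linearize`/`scatter_back` helper names are hypothetical, not the PMD's mbuf code):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for a chained mbuf segment. */
struct seg {
	uint8_t *data;
	size_t len;
	struct seg *next;
};

/* Gather all segments into one contiguous buffer (caller frees). */
static uint8_t *
linearize(const struct seg *s, size_t total)
{
	uint8_t *buf = calloc(1, total);
	size_t off = 0;

	for (; s != NULL && off < total; s = s->next) {
		size_t n = s->len < total - off ? s->len : total - off;

		memcpy(buf + off, s->data, n);
		off += n;
	}
	return buf;
}

/* Split the processed linear buffer back into destination segments. */
static void
scatter_back(struct seg *d, const uint8_t *buf, size_t total)
{
	size_t off = 0;

	for (; d != NULL && off < total; d = d->next) {
		size_t n = d->len < total - off ? d->len : total - off;

		memcpy(d->data, buf + off, n);
		off += n;
	}
}
```

In the PMD itself the linear buffer is passed to intel-ipsec-mb as a single job between these two steps, and the buffer is freed in post-processing.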
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v3:
- Reduced code duplication by adding a reusable function.
- Changed int to uint64_t for total_len.
v2:
- Small improvements when copying segments to linear buffer.
- Added documentation changes.
---
doc/guides/cryptodevs/aesni_mb.rst | 1 -
doc/guides/cryptodevs/features/aesni_mb.ini | 4 +
doc/guides/rel_notes/release_22_11.rst | 5 +
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 180 ++++++++++++++++----
4 files changed, 156 insertions(+), 34 deletions(-)
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 07222ee117..59c134556f 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -72,7 +72,6 @@ Protocol offloads:
Limitations
-----------
-* Chained mbufs are not supported.
* Out-of-place is not supported for combined Crypto-CRC DOCSIS security
protocol.
* RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 3c648a391e..e4e965c35a 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -12,6 +12,10 @@ CPU AVX = Y
CPU AVX2 = Y
CPU AVX512 = Y
CPU AESNI = Y
+In Place SGL = Y
+OOP SGL In SGL Out = Y
+OOP SGL In LB Out = Y
+OOP LB In SGL Out = Y
OOP LB In LB Out = Y
CPU crypto = Y
Symmetric sessionless = Y
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index fc1649f5a2..3afc083563 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -122,6 +122,11 @@ New Features
* Added AES-CCM support in lookaside protocol (IPsec) for CN9K & CN10K.
* Added AES & DES DOCSIS algorithm support in lookaside crypto for CN9K.
+* **Added SGL support to AESNI_MB PMD.**
+
+ Added support for SGL to AESNI_MB PMD. Support for in-place,
+ OOP SGL in SGL out, OOP LB in SGL out, and OOP SGL in LB out added.
+
* **Added new operation for FFT processing in bbdev.**
Added a new operation type in bbdev for FFT processing with new functions
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 8ec2364aa7..8477a09061 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t oop, const uint32_t auth_offset,
const uint32_t cipher_offset, const uint32_t auth_length,
- const uint32_t cipher_length)
+ const uint32_t cipher_length, uint8_t lb_sgl)
{
struct rte_mbuf *m_src, *m_dst;
uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
uint32_t cipher_end, auth_end;
/* Only cipher then hash needs special calculation. */
- if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+ if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
return auth_offset;
m_src = op->sym->m_src;
@@ -1159,6 +1159,81 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
return 0;
}
+static uint64_t
+sgl_linear_cipher_auth_len(IMB_JOB *job, uint64_t *auth_len)
+{
+ uint64_t cipher_len;
+
+ if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+ job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+ cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+ (job->cipher_start_src_offset_in_bits >> 3);
+ else
+ cipher_len = job->msg_len_to_cipher_in_bytes +
+ job->cipher_start_src_offset_in_bytes;
+
+ if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+ job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+ *auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+ job->hash_start_src_offset_in_bytes;
+ else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ *auth_len = job->u.GCM.aad_len_in_bytes;
+ else
+ *auth_len = job->msg_len_to_hash_in_bytes +
+ job->hash_start_src_offset_in_bytes;
+
+ return RTE_MAX(*auth_len, cipher_len);
+}
+
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+ struct aesni_mb_session *session)
+{
+ uint64_t auth_len, total_len;
+ uint8_t *src, *linear_buf = NULL;
+ int lb_offset = 0;
+ struct rte_mbuf *src_seg;
+ uint16_t src_len;
+
+ total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+ linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
+ if (linear_buf == NULL) {
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ return -1;
+ }
+
+ for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+ (total_len - lb_offset > 0);
+ src_seg = src_seg->next) {
+ src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+ src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+ rte_memcpy(linear_buf + lb_offset, src, src_len);
+ lb_offset += src_len;
+ }
+
+ job->src = linear_buf;
+ job->dst = linear_buf + dst_offset;
+ job->user_data2 = linear_buf;
+
+ if (job->hash_alg == IMB_AUTH_AES_GMAC)
+ job->u.GCM.aad = linear_buf;
+
+ if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+ job->auth_tag_output = linear_buf + lb_offset;
+ else
+ job->auth_tag_output = linear_buf + auth_len;
+
+ return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+ if (alg == IMB_CIPHER_CHACHA20_POLY1305
+ || alg == IMB_CIPHER_GCM)
+ return 1;
+ return 0;
+}
/**
* Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1246,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
*
* @return
* - 0 on success, the IMB_JOB will be filled
- * - -1 if invalid session, IMB_JOB will not be filled
+ * - -1 if invalid session or errors allocating SGL linear buffer,
+ * IMB_JOB will not be filled
*/
static inline int
set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1267,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint32_t total_len;
IMB_JOB base_job;
uint8_t sgl = 0;
+ uint8_t lb_sgl = 0;
int ret;
session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1276,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
return -1;
}
- if (op->sym->m_src->nb_segs > 1) {
- if (session->cipher.mode != IMB_CIPHER_GCM
- && session->cipher.mode !=
- IMB_CIPHER_CHACHA20_POLY1305) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
- " or CHACHA20_POLY1305 algorithms.");
- return -1;
- }
- sgl = 1;
- }
-
/* Set crypto operation */
job->chain_order = session->chain_order;
@@ -1233,6 +1298,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->dec_keys = session->cipher.expanded_aes_keys.decode;
}
+ if (!op->sym->m_dst) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else if (op->sym->m_dst == op->sym->m_src) {
+ /* in-place operation */
+ m_dst = m_src;
+ oop = 0;
+ } else {
+ /* out-of-place operation */
+ m_dst = op->sym->m_dst;
+ oop = 1;
+ }
+
+ if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+ sgl = 1;
+ if (!imb_lib_support_sgl_algo(session->cipher.mode))
+ lb_sgl = 1;
+ }
+
switch (job->hash_alg) {
case IMB_AUTH_AES_XCBC:
job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1416,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
m_offset = 0;
}
- if (!op->sym->m_dst) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else if (op->sym->m_dst == op->sym->m_src) {
- /* in-place operation */
- m_dst = m_src;
- oop = 0;
- } else {
- /* out-of-place operation */
- m_dst = op->sym->m_dst;
- oop = 1;
- }
-
/* Set digest output location */
if (job->hash_alg != IMB_AUTH_NULL &&
session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1506,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bits = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1523,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->hash_start_src_offset_in_bytes = auth_start_offset(op,
session, oop, auth_off_in_bytes,
ciph_off_in_bytes, auth_len_in_bytes,
- ciph_len_in_bytes);
+ ciph_len_in_bytes, lb_sgl);
job->msg_len_to_hash_in_bytes = auth_len_in_bytes;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1535,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session, oop, op->sym->auth.data.offset,
op->sym->cipher.data.offset,
op->sym->auth.data.length,
- op->sym->cipher.data.length);
+ op->sym->cipher.data.length, lb_sgl);
job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1596,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
job->user_data = op;
if (sgl) {
+
+ if (lb_sgl)
+ return handle_sgl_linear(job, op, m_offset, session);
+
base_job = *job;
job->sgl_state = IMB_SGL_INIT;
job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1694,6 +1769,31 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
sess->auth.req_digest_len);
}
+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+ struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+ int lb_offset = 0;
+ struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+ op->sym->m_src : op->sym->m_dst;
+ uint16_t total_len, dst_len;
+ uint64_t auth_len;
+ uint8_t *dst;
+
+ total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+
+ if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+ total_len += job->auth_tag_output_len_in_bytes;
+
+ for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) {
+ dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+ dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+ rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+ lb_offset += dst_len;
+ }
+}
+
/**
* Process a completed job and return rte_mbuf which job processed
*
@@ -1709,6 +1809,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
{
struct rte_crypto_op *op = (struct rte_crypto_op *)job->user_data;
struct aesni_mb_session *sess = NULL;
+ uint8_t *linear_buf = NULL;
#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
uint8_t is_docsis_sec = 0;
@@ -1729,6 +1830,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
case IMB_STATUS_COMPLETED:
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ if ((op->sym->m_src->nb_segs > 1 ||
+ (op->sym->m_dst != NULL &&
+ op->sym->m_dst->nb_segs > 1)) &&
+ !imb_lib_support_sgl_algo(sess->cipher.mode)) {
+ linear_buf = (uint8_t *) job->user_data2;
+ post_process_sgl_linear(op, job, sess, linear_buf);
+ }
+
if (job->hash_alg == IMB_AUTH_NULL)
break;
@@ -1755,6 +1864,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
default:
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
}
+ rte_free(linear_buf);
}
/* Free session if a session-less crypto op */
@@ -2213,7 +2323,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+ RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+ RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+ RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
aesni_mb_data->internals_priv_size = 0;
aesni_mb_data->ops = &aesni_mb_pmd_ops;
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v5 3/4] test/crypto: add OOP snow3g SGL tests
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
2022-10-07 13:46 ` [PATCH v5 1/4] test/crypto: fix wireless auth digest segment Ciara Power
2022-10-07 13:46 ` [PATCH v5 2/4] crypto/ipsec_mb: add remaining SGL support Ciara Power
@ 2022-10-07 13:46 ` Ciara Power
2022-10-07 13:46 ` [PATCH v5 4/4] test/crypto: add remaining blockcipher " Ciara Power
2022-10-12 18:22 ` [EXT] [PATCH v5 0/4] add remaining SGL support to AESNI_MB Akhil Goyal
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-07 13:46 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang; +Cc: dev, kai.ji, Ciara Power, Fan Zhang, Pablo de Lara
More tests are added to cover variations of OOP SGL for snow3g.
These include LB_IN_SGL_OUT and SGL_IN_LB_OUT.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
1 file changed, 39 insertions(+), 9 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 203b8b61fa..c2b33686ed 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4299,7 +4299,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
}
static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+ uint8_t sgl_in, uint8_t sgl_out)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4330,9 +4331,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
uint64_t feat_flags = dev_info.feature_flags;
- if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
- printf("Device doesn't support out-of-place scatter-gather "
- "in both input and output mbufs. "
+ if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+ || ((!sgl_in && sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+ || ((sgl_in && !sgl_out) &&
+ !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+ printf("Device doesn't support out-of-place scatter gather type. "
"Test Skipped.\n");
return TEST_SKIPPED;
}
@@ -4357,10 +4361,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
/* the algorithms block size */
plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);
- ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 10, 0);
- ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
- plaintext_pad_len, 3, 0);
+ if (sgl_in)
+ ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 10, 0);
+ else {
+ ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+ }
+
+ if (sgl_out)
+ ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+ plaintext_pad_len, 3, 0);
+ else {
+ ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+ rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+ }
TEST_ASSERT_NOT_NULL(ut_params->ibuf,
"Failed to allocate input buffer in mempool");
@@ -6714,9 +6729,20 @@ test_snow3g_encryption_test_case_1_oop(void)
static int
test_snow3g_encryption_test_case_1_oop_sgl(void)
{
- return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
}
+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+ return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}
static int
test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -15842,6 +15868,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
test_snow3g_encryption_test_case_1_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_oop_sgl),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
TEST_CASE_ST(ut_setup, ut_teardown,
test_snow3g_encryption_test_case_1_offset_oop),
TEST_CASE_ST(ut_setup, ut_teardown,
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH v5 4/4] test/crypto: add remaining blockcipher SGL tests
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
` (2 preceding siblings ...)
2022-10-07 13:46 ` [PATCH v5 3/4] test/crypto: add OOP snow3g SGL tests Ciara Power
@ 2022-10-07 13:46 ` Ciara Power
2022-10-12 18:22 ` [EXT] [PATCH v5 0/4] add remaining SGL support to AESNI_MB Akhil Goyal
4 siblings, 0 replies; 38+ messages in thread
From: Ciara Power @ 2022-10-07 13:46 UTC (permalink / raw)
To: Akhil Goyal, Fan Zhang, Yipeng Wang, Sameh Gobriel,
Bruce Richardson, Vladimir Medvedkin
Cc: dev, kai.ji, Ciara Power, Fan Zhang, Pablo de Lara
The current blockcipher test function only has support for two types of
SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into
the function, with the number of segments always set to 3.
To ensure all SGL types are tested, blockcipher test vectors now have
fields to specify SGL type, and the number of segments.
If these fields are missing, the previous defaults are used,
either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.
Some AES and Hash vectors are modified to use these new fields, and new
AES tests are added to cover the SGL types that were not previously
tested.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v5:
- Fixed AES-XTS tests to add in some that were accidentally
removed in previous versions.
- Removed some AES-XTS tests added in previous version that are
skipped for AESNI_MB and QAT, as they haven't been verified locally.
---
app/test/test_cryptodev_aes_test_vectors.h | 310 +++++++++++++++++++-
app/test/test_cryptodev_blockcipher.c | 50 ++--
app/test/test_cryptodev_blockcipher.h | 2 +
app/test/test_cryptodev_hash_test_vectors.h | 8 +-
4 files changed, 338 insertions(+), 32 deletions(-)
diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h
index a797af1b00..ea7b21ce53 100644
--- a/app/test/test_cryptodev_aes_test_vectors.h
+++ b/app/test/test_cryptodev_aes_test_vectors.h
@@ -4163,12 +4163,44 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (LB in SGL out)",
.test_data = &aes_test_data_2,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
},
+ {
+ .test_descr = "AES-192-CTR XCBC Decryption Digest Verify "
+ "Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_2,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
+
{
.test_descr = "AES-256-CTR HMAC-SHA1 Encryption Digest",
.test_data = &aes_test_data_3,
@@ -4193,11 +4225,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
- "Scatter Gather",
+ "Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
+ "Scatter Gather OOP (SGL in LB out)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4207,10 +4280,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP 16 segs (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 16
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4255,12 +4370,46 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
- "Scatter Gather Sessionless",
+ "Scatter Gather Sessionless (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (LB in SGL out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest "
+ "Scatter Gather Sessionless OOP (SGL in LB out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
+ BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
@@ -4270,11 +4419,42 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
},
{
.test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
- "Verify Scatter Gather",
+ "Verify Scatter Gather (Inplace)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 2
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_6,
+ .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest "
+ "Verify Scatter Gather OOP (SGL in LB out)",
.test_data = &aes_test_data_6,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC XCBC Encryption Digest",
@@ -4358,6 +4538,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN_ENC,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -4382,6 +4564,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4397,6 +4581,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DEC_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4421,6 +4607,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = {
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS |
BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest "
@@ -4504,6 +4692,41 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.test_data = &aes_test_data_4,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_4,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
+ },
{
.test_descr = "AES-128-CBC Decryption",
.test_data = &aes_test_data_4,
@@ -4515,11 +4738,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
},
{
- .test_descr = "AES-192-CBC Encryption Scatter gather",
+ .test_descr = "AES-192-CBC Encryption Scatter gather (Inplace)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in LB out)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-192-CBC Decryption",
@@ -4527,10 +4778,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
},
{
- .test_descr = "AES-192-CBC Decryption Scatter Gather",
+ .test_descr = "AES-192-CBC Decryption Scatter Gather (Inplace)",
.test_data = &aes_test_data_10,
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (LB in SGL out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT,
+ .sgl_segs = 3
+ },
+ {
+ .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in LB out)",
+ .test_data = &aes_test_data_10,
+ .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
+ .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG |
+ BLOCKCIPHER_TEST_FEATURE_OOP,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-CBC Encryption",
@@ -4694,6 +4974,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (512-byte plaintext"
@@ -4702,6 +4984,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Encryption (512-byte plaintext"
@@ -4710,6 +4994,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (512-byte plaintext"
@@ -4718,6 +5004,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
@@ -4726,6 +5014,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
@@ -4734,6 +5024,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Encryption (4096-byte plaintext"
@@ -4742,6 +5034,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "AES-256-XTS Decryption (4096-byte plaintext"
@@ -4750,6 +5044,8 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_DECRYPT,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP |
BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT,
+ .sgl_segs = 3
},
{
.test_descr = "cipher-only - NULL algo - x8 - encryption",
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 28479812ee..6c9a5964ea 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -95,7 +95,9 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_dst_buf[MBUF_SIZE];
uint32_t pad_len;
- int nb_segs = 1;
+ int nb_segs_in = 1;
+ int nb_segs_out = 1;
+ uint64_t sgl_type = t->sgl_flag;
uint32_t nb_iterates = 0;
rte_cryptodev_info_get(dev_id, &dev_info);
@@ -120,30 +122,31 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
- uint64_t oop_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ if (sgl_type == 0) {
+ if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP)
+ sgl_type = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;
+ else
+ sgl_type = RTE_CRYPTODEV_FF_IN_PLACE_SGL;
+ }
- if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- if (!(feat_flags & oop_flag)) {
- printf("Device doesn't support out-of-place "
- "scatter-gather in input mbuf. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
- } else {
- if (!(feat_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL)) {
- printf("Device doesn't support in-place "
- "scatter-gather mbufs. "
- "Test Skipped.\n");
- snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
- "SKIPPED");
- return TEST_SKIPPED;
- }
+ if (!(feat_flags & sgl_type)) {
+ printf("Device doesn't support scatter-gather type."
+ " Test Skipped.\n");
+ snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
+ "SKIPPED");
+ return TEST_SKIPPED;
}
- nb_segs = 3;
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_IN_PLACE_SGL)
+ nb_segs_in = t->sgl_segs == 0 ? 3 : t->sgl_segs;
+
+ if (sgl_type == RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT ||
+ sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)
+ nb_segs_out = t->sgl_segs == 0 ? 3 : t->sgl_segs;
}
+
if (!!(feat_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) ^
tdata->wrapped_key) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
@@ -206,7 +209,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
/* for contiguous mbuf, nb_segs is 1 */
ibuf = create_segmented_mbuf(mbuf_pool,
- tdata->ciphertext.len, nb_segs, src_pattern);
+ tdata->ciphertext.len, nb_segs_in, src_pattern);
if (ibuf == NULL) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
@@ -255,7 +258,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) {
- obuf = rte_pktmbuf_alloc(mbuf_pool);
+ obuf = create_segmented_mbuf(mbuf_pool,
+ tdata->ciphertext.len, nb_segs_out, dst_pattern);
if (!obuf) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u "
"FAILED: %s", __LINE__,
diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index 84f5d57787..bad93a5ec1 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -57,6 +57,8 @@ struct blockcipher_test_case {
const struct blockcipher_test_data *test_data;
uint8_t op_mask; /* operation mask */
uint8_t feature_mask;
+ uint64_t sgl_flag;
+ uint8_t sgl_segs;
};
struct blockcipher_test_data {
diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h
index 5bd7858de4..62602310b2 100644
--- a/app/test/test_cryptodev_hash_test_vectors.h
+++ b/app/test/test_cryptodev_hash_test_vectors.h
@@ -467,10 +467,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
},
{
- .test_descr = "HMAC-SHA1 Digest Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "HMAC-SHA1 Digest Verify",
@@ -478,10 +480,12 @@ static const struct blockcipher_test_case hash_test_cases[] = {
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
},
{
- .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather",
+ .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather (Inplace)",
.test_data = &hmac_sha1_test_vector,
.op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY,
.feature_mask = BLOCKCIPHER_TEST_FEATURE_SG,
+ .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL,
+ .sgl_segs = 3
},
{
.test_descr = "SHA224 Digest",
--
2.25.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* RE: [EXT] [PATCH v5 0/4] add remaining SGL support to AESNI_MB
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
` (3 preceding siblings ...)
2022-10-07 13:46 ` [PATCH v5 4/4] test/crypto: add remaining blockcipher " Ciara Power
@ 2022-10-12 18:22 ` Akhil Goyal
4 siblings, 0 replies; 38+ messages in thread
From: Akhil Goyal @ 2022-10-12 18:22 UTC (permalink / raw)
To: Ciara Power; +Cc: dev, kai.ji
> Currently, the intel-ipsec-mb library only supports SGL for
> GCM and ChaCha20-Poly1305 algorithms through the JOB API.
>
> To add SGL support for other algorithms, a workaround approach is
> added in the AESNI_MB PMD. SGL feature flags can now be added to
> the PMD.
>
> Some additional Snow3G SGL and AES tests are also added for
> various SGL input/output combinations that were not
> previously being tested.
>
> v5:
> - Rebased on for-main branch after session rework merged.
> - Dropped bug fix patch for sessionless tests, no longer
> applicable after session rework.
> - Removed some added AES-XTS tests as they are skipped for AESNI_MB
> and QAT and I can't verify them locally.
> - Added AES-XTS tests that had been accidentally removed in
> previous patchset versions.
Applied to dpdk-next-crypto
Thanks.
^ permalink raw reply [flat|nested] 38+ messages in thread
end of thread [~2022-10-12 18:22 UTC]
Thread overview: 38+ messages
2022-08-12 13:23 [PATCH 0/3] add remaining SGL support to AESNI_MB Ciara Power
2022-08-12 13:23 ` [PATCH 1/3] test/crypto: fix wireless auth digest segment Ciara Power
2022-08-12 13:23 ` [PATCH 2/3] crypto/ipsec_mb: add remaining SGL support Ciara Power
2022-08-12 13:23 ` [PATCH 3/3] test/crypto: add OOP snow3g SGL tests Ciara Power
2022-08-25 14:28 ` [PATCH v2 0/5] add remaining SGL support to AESNI_MB Ciara Power
2022-08-25 14:28 ` [PATCH v2 1/5] test/crypto: fix wireless auth digest segment Ciara Power
2022-08-25 14:28 ` [PATCH v2 2/5] crypto/ipsec_mb: fix sessionless cleanup Ciara Power
2022-09-15 11:38 ` De Lara Guarch, Pablo
2022-09-21 13:02 ` Power, Ciara
2022-08-25 14:28 ` [PATCH v2 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
2022-09-15 11:47 ` De Lara Guarch, Pablo
2022-08-25 14:29 ` [PATCH v2 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
2022-08-25 14:29 ` [PATCH v2 5/5] test/crypto: add remaining blockcipher " Ciara Power
2022-09-21 12:50 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB Ciara Power
2022-09-21 12:50 ` [PATCH v3 1/5] test/crypto: fix wireless auth digest segment Ciara Power
2022-09-21 13:32 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
2022-09-21 13:33 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
2022-09-21 14:50 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
2022-09-21 14:54 ` Zhang, Roy Fan
2022-09-21 12:50 ` [PATCH v3 5/5] test/crypto: add remaining blockcipher " Ciara Power
2022-09-21 14:55 ` Zhang, Roy Fan
2022-09-26 8:06 ` [PATCH v3 0/5] add remaining SGL support to AESNI_MB De Lara Guarch, Pablo
2022-10-04 12:55 ` [PATCH v4 " Ciara Power
2022-10-04 12:55 ` [PATCH v4 1/5] test/crypto: fix wireless auth digest segment Ciara Power
2022-10-04 12:55 ` [PATCH v4 2/5] crypto/ipsec_mb: fix session creation for sessionless Ciara Power
2022-10-04 12:55 ` [PATCH v4 3/5] crypto/ipsec_mb: add remaining SGL support Ciara Power
2022-10-04 12:55 ` [PATCH v4 4/5] test/crypto: add OOP snow3g SGL tests Ciara Power
2022-10-04 12:55 ` [PATCH v4 5/5] test/crypto: add remaining blockcipher " Ciara Power
2022-10-07 6:53 ` [EXT] [PATCH v4 0/5] add remaining SGL support to AESNI_MB Akhil Goyal
2022-10-07 13:46 ` [PATCH v5 0/4] " Ciara Power
2022-10-07 13:46 ` [PATCH v5 1/4] test/crypto: fix wireless auth digest segment Ciara Power
2022-10-07 13:46 ` [PATCH v5 2/4] crypto/ipsec_mb: add remaining SGL support Ciara Power
2022-10-07 13:46 ` [PATCH v5 3/4] test/crypto: add OOP snow3g SGL tests Ciara Power
2022-10-07 13:46 ` [PATCH v5 4/4] test/crypto: add remaining blockcipher " Ciara Power
2022-10-12 18:22 ` [EXT] [PATCH v5 0/4] add remaining SGL support to AESNI_MB Akhil Goyal