* [PATCH v2 0/2] app/eventdev: fix issues with cop alloc and qp size

From: Volodymyr Fialko @ 2022-06-17 12:38 UTC
To: dev; +Cc: jerinj, anoobj, Volodymyr Fialko

- Handle crypto operation (cop) allocation failures and count them.
- Increase the default queue pair size; the previous size of 128 was too
  small for most cases.

---
v2:
- Resolve issues with the patch description.

Volodymyr Fialko (2):
  app/eventdev: add null checks for cop allocations
  app/eventdev: increase number of qp descriptors

 app/test-eventdev/test_perf_common.c | 42 ++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 3 deletions(-)

--
2.25.1
* [PATCH v2 1/2] app/eventdev: add null checks for cop allocations

From: Volodymyr Fialko @ 2022-06-17 12:38 UTC
To: dev, Jerin Jacob; +Cc: anoobj, Volodymyr Fialko

Crypto operation allocation may fail in case when total size of queue
pairs are bigger then the pool size.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test-eventdev/test_perf_common.c | 40 ++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index b41785492e..a5e031873d 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -367,6 +367,7 @@ crypto_adapter_enq_op_new(struct prod_data *p)
 	struct evt_options *opt = t->opt;
 	uint16_t qp_id = p->ca.cdev_qp_id;
 	uint8_t cdev_id = p->ca.cdev_id;
+	uint64_t alloc_failures = 0;
 	uint32_t flow_counter = 0;
 	struct rte_crypto_op *op;
 	struct rte_mbuf *m;
@@ -386,9 +387,17 @@ crypto_adapter_enq_op_new(struct prod_data *p)
 
 			op = rte_crypto_op_alloc(t->ca_op_pool,
 					RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+			if (unlikely(op == NULL)) {
+				alloc_failures++;
+				continue;
+			}
+
 			m = rte_pktmbuf_alloc(pool);
-			if (m == NULL)
+			if (unlikely(m == NULL)) {
+				alloc_failures++;
+				rte_crypto_op_free(op);
 				continue;
+			}
 
 			rte_pktmbuf_append(m, len);
 			sym_op = op->sym;
@@ -404,6 +413,11 @@ crypto_adapter_enq_op_new(struct prod_data *p)
 
 			op = rte_crypto_op_alloc(t->ca_op_pool,
 					RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
+			if (unlikely(op == NULL)) {
+				alloc_failures++;
+				continue;
+			}
+
 			asym_op = op->asym;
 			asym_op->modex.base.data = modex_test_case.base.data;
 			asym_op->modex.base.length = modex_test_case.base.len;
@@ -418,6 +432,10 @@ crypto_adapter_enq_op_new(struct prod_data *p)
 
 		count++;
 	}
+
+	if (opt->verbose_level > 1 && alloc_failures)
+		printf("%s(): lcore %d allocation failures: %"PRIu64"\n",
+		       __func__, rte_lcore_id(), alloc_failures);
 }
 
 static inline void
@@ -430,6 +448,7 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 	const uint64_t nb_pkts = t->nb_pkts;
 	struct rte_mempool *pool = t->pool;
 	struct evt_options *opt = t->opt;
+	uint64_t alloc_failures = 0;
 	uint32_t flow_counter = 0;
 	struct rte_crypto_op *op;
 	struct rte_event ev;
@@ -455,9 +474,17 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 
 			op = rte_crypto_op_alloc(t->ca_op_pool,
 					RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+			if (unlikely(op == NULL)) {
+				alloc_failures++;
+				continue;
+			}
+
 			m = rte_pktmbuf_alloc(pool);
-			if (m == NULL)
+			if (unlikely(m == NULL)) {
+				alloc_failures++;
+				rte_crypto_op_free(op);
 				continue;
+			}
 
 			rte_pktmbuf_append(m, len);
 			sym_op = op->sym;
@@ -473,6 +500,11 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 
 			op = rte_crypto_op_alloc(t->ca_op_pool,
 					RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
+			if (unlikely(op == NULL)) {
+				alloc_failures++;
+				continue;
+			}
+
 			asym_op = op->asym;
 			asym_op->modex.base.data = modex_test_case.base.data;
 			asym_op->modex.base.length = modex_test_case.base.len;
@@ -489,6 +521,10 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 
 		count++;
 	}
+
+	if (opt->verbose_level > 1 && alloc_failures)
+		printf("%s(): lcore %d allocation failures: %"PRIu64"\n",
+		       __func__, rte_lcore_id(), alloc_failures);
 }
 
 static inline int
--
2.25.1
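Outside the diff context, the pattern the patch introduces boils down to a short standalone sketch. This is not the submitted code: the helper name is illustrative, and the crypto-op and mbuf mempools are assumed to be created elsewhere (EAL init, pool setup and the enqueue loop are omitted).

/* Sketch of the allocation pattern: count failures, and free the crypto op
 * if the mbuf allocation fails so the op is not leaked from its pool. */
#include <stdint.h>
#include <rte_crypto.h>
#include <rte_mbuf.h>

static struct rte_crypto_op *
alloc_sym_op_with_mbuf(struct rte_mempool *op_pool, struct rte_mempool *mbuf_pool,
		       uint16_t len, uint64_t *alloc_failures)
{
	struct rte_crypto_op *op;
	struct rte_mbuf *m;

	op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
	if (op == NULL) {
		(*alloc_failures)++;		/* op pool exhausted */
		return NULL;
	}

	m = rte_pktmbuf_alloc(mbuf_pool);
	if (m == NULL) {
		(*alloc_failures)++;		/* mbuf pool exhausted */
		rte_crypto_op_free(op);		/* give the op back */
		return NULL;
	}

	rte_pktmbuf_append(m, len);		/* reserve the payload area */
	op->sym->m_src = m;
	return op;
}

Freeing the op on the mbuf failure path is the part the old code skipped: before this patch one crypto op leaked for every failed mbuf allocation.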
* Re: [PATCH v2 1/2] app/eventdev: add null checks for cop allocations

From: Jerin Jacob @ 2022-06-20 19:29 UTC
To: Volodymyr Fialko; +Cc: dpdk-dev, Jerin Jacob, Anoob Joseph

On Fri, Jun 17, 2022 at 6:09 PM Volodymyr Fialko <vfialko@marvell.com> wrote:
>
> Crypto operation allocation may fail in case when total size of queue
> pairs are bigger then the pool size.

then -> than

>
> Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>

Series-Acked-by: Jerin Jacob <jerinj@marvell.com>

Series applied to dpdk-next-net-eventdev/for-main. Thanks
* [PATCH v2 2/2] app/eventdev: increase number of qp descriptors

From: Volodymyr Fialko @ 2022-06-17 12:38 UTC
To: dev, Jerin Jacob; +Cc: anoobj, Volodymyr Fialko

Increase the default number of cryptodev queue pair descriptors. The
current size of 128 descriptors does not satisfy the minimal
requirements of some crypto drivers.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test-eventdev/test_perf_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index a5e031873d..81420be73a 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -6,7 +6,7 @@
 
 #include "test_perf_common.h"
 
-#define NB_CRYPTODEV_DESCRIPTORS 128
+#define NB_CRYPTODEV_DESCRIPTORS 1024
 #define DATA_SIZE 512
 struct modex_test_data {
 	enum rte_crypto_asym_xform_type xform_type;
--
2.25.1
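The define sizes the descriptor ring handed to the driver when the test configures each cryptodev queue pair. A rough sketch of that consumption follows; it is not the test code itself (the function name is illustrative, and session-pool wiring and error handling are omitted).

#include <rte_cryptodev.h>

/* Sketch: nb_descriptors sizes the queue pair ring passed to the PMD.
 * Drivers with a larger minimum reject small values, which is why the
 * old default of 128 was not enough. */
static int
setup_cdev_qp(uint8_t cdev_id, uint16_t qp_id, int socket_id)
{
	struct rte_cryptodev_qp_conf qp_conf = {
		.nb_descriptors = 1024,	/* NB_CRYPTODEV_DESCRIPTORS after this patch */
	};

	return rte_cryptodev_queue_pair_setup(cdev_id, qp_id, &qp_conf, socket_id);
}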
end of thread, other threads: [~2022-06-20 19:30 UTC | newest]

Thread overview: 4+ messages
2022-06-17 12:38 [PATCH v2 0/2] app/eventdev: fix issues with cop alloc and qp size Volodymyr Fialko
2022-06-17 12:38 ` [PATCH v2 1/2] app/eventdev: add null checks for cop allocations Volodymyr Fialko
2022-06-20 19:29   ` Jerin Jacob
2022-06-17 12:38 ` [PATCH v2 2/2] app/eventdev: increase number of qp descriptors Volodymyr Fialko