DPDK patches and discussions
* [PATCH] event/cnxk: increase inflight buffer count
@ 2025-09-29 10:27 Tejasree Kondoj
  2025-10-07  6:53 ` Jerin Jacob
  0 siblings, 1 reply; 2+ messages in thread
From: Tejasree Kondoj @ 2025-09-29 10:27 UTC (permalink / raw)
  To: Jerin Jacob, Pavan Nikhilesh, Shijith Thotton; +Cc: Anoob Joseph, dev

Increase the crypto adapter inflight buffer count, as
performance drops to zero when high traffic is sent
on 16 or 24 cores in event vector mode.

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
---
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 80f770ee8d..3380712095 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -667,7 +667,7 @@ crypto_adapter_qp_setup(const struct rte_cryptodev *cdev, struct cnxk_cpt_qp *qp
 	snprintf(name, RTE_MEMPOOL_NAMESIZE, "cnxk_ca_req_%u:%u", cdev->data->dev_id, qp->lf.lf_id);
 	req_size = sizeof(struct cpt_inflight_req);
 	cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, qp->lf.nb_desc / 1.5);
-	nb_req = RTE_MAX(qp->lf.nb_desc, cache_size * rte_lcore_count());
+	nb_req = qp->lf.nb_desc + (cache_size * rte_lcore_count());
 	qp->ca.req_mp = rte_mempool_create(name, nb_req, req_size, cache_size, 0, NULL, NULL, NULL,
 					   NULL, rte_socket_id(), 0);
 	if (qp->ca.req_mp == NULL)
-- 
2.25.1
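
For context, a minimal sketch (not the driver code itself) of the sizing
logic this patch changes: rte_mempool_create() keeps a per-lcore cache of
up to cache_size objects, so with many lcores a large share of the pool can
sit idle in local caches. Sizing the pool as nb_desc plus
cache_size * rte_lcore_count() leaves headroom for those cached objects on
top of the requests that can genuinely be in flight. The helper name
ca_req_pool_size() below is hypothetical; the constants and the formula
follow the patch.

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Illustrative helper (name is hypothetical): compute the request pool
 * size the same way the patched code does. Each lcore may hold up to
 * cache_size objects in its local mempool cache, so that many extra
 * elements are added on top of the nb_desc descriptors that can be in
 * flight at once. */
static unsigned int
ca_req_pool_size(unsigned int nb_desc)
{
	unsigned int cache_size;

	cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, nb_desc / 1.5);
	return nb_desc + cache_size * rte_lcore_count();
}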



* RE: [PATCH] event/cnxk: increase inflight buffer count
  2025-09-29 10:27 [PATCH] event/cnxk: increase inflight buffer count Tejasree Kondoj
@ 2025-10-07  6:53 ` Jerin Jacob
  0 siblings, 0 replies; 2+ messages in thread
From: Jerin Jacob @ 2025-10-07  6:53 UTC (permalink / raw)
  To: Tejasree Kondoj, Pavan Nikhilesh Bhagavatula, Shijith Thotton
  Cc: Anoob Joseph, dev



> -----Original Message-----
> From: Tejasree Kondoj <ktejasree@marvell.com>
> Sent: Monday, September 29, 2025 3:58 PM
> To: Jerin Jacob <jerinj@marvell.com>; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>; Shijith Thotton <sthotton@marvell.com>
> Cc: Anoob Joseph <anoobj@marvell.com>; dev@dpdk.org
> Subject: [PATCH] event/cnxk: increase inflight buffer count
> 
> Increase the crypto adapter inflight buffer count, as performance drops to
> zero when high traffic is sent on 16 or 24 cores in event vector mode.
> 
> Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>



Applied to dpdk-next-net-eventdev/for-main. Thanks

> ---
>  drivers/event/cnxk/cnxk_eventdev_adptr.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
> index 80f770ee8d..3380712095 100644
> --- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
> +++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
> @@ -667,7 +667,7 @@ crypto_adapter_qp_setup(const struct rte_cryptodev *cdev, struct cnxk_cpt_qp *qp
>  	snprintf(name, RTE_MEMPOOL_NAMESIZE, "cnxk_ca_req_%u:%u", cdev->data->dev_id, qp->lf.lf_id);
>  	req_size = sizeof(struct cpt_inflight_req);
>  	cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, qp->lf.nb_desc / 1.5);
> -	nb_req = RTE_MAX(qp->lf.nb_desc, cache_size * rte_lcore_count());
> +	nb_req = qp->lf.nb_desc + (cache_size * rte_lcore_count());
>  	qp->ca.req_mp = rte_mempool_create(name, nb_req, req_size, cache_size, 0, NULL, NULL, NULL,
>  					   NULL, rte_socket_id(), 0);
>  	if (qp->ca.req_mp == NULL)
> --
> 2.25.1


