DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: Pavan Nikhilesh <pbhagavatula@marvell.com>
Cc: Jerin Jacob <jerinj@marvell.com>,
	Radu Nicolau <radu.nicolau@intel.com>,
	 Akhil Goyal <gakhil@marvell.com>,
	Sunil Kumar Kori <skori@marvell.com>, dpdk-dev <dev@dpdk.org>
Subject: Re: [PATCH 2/2] examples: use mempool cache for vector pool
Date: Fri, 10 Jun 2022 19:16:15 +0530	[thread overview]
Message-ID: <CALBAE1OnPjvt6KcqVR3gSRvqdACumK5Dfv9gV_OczWr+DheA2g@mail.gmail.com> (raw)
In-Reply-To: <20220523095954.3181-2-pbhagavatula@marvell.com>

On Mon, May 23, 2022 at 3:30 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Use a mempool cache for the vector mempool, since vectors are freed by
> the Tx routine; also increase the minimum pool size to 512 to avoid
> resource contention on Rx.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>


Acked-by: Jerin Jacob <jerinj@marvell.com>

> ---
>  examples/ipsec-secgw/event_helper.c | 8 ++++----
>  examples/l2fwd-event/main.c         | 4 +++-
>  examples/l3fwd/main.c               | 4 +++-
>  3 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
> index 172ab8e716..b36f20a3fd 100644
> --- a/examples/ipsec-secgw/event_helper.c
> +++ b/examples/ipsec-secgw/event_helper.c
> @@ -820,12 +820,12 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
>                                    em_conf->ext_params.vector_size) + 1;
>                         if (per_port_pool)
>                                 nb_elem = nb_ports * nb_elem;
> +                       nb_elem = RTE_MAX(512U, nb_elem);
>                 }
> -
> +               nb_elem += rte_lcore_count() * 32;
>                 vector_pool = rte_event_vector_pool_create(
> -                       "vector_pool", nb_elem, 0,
> -                       em_conf->ext_params.vector_size,
> -                       socket_id);
> +                       "vector_pool", nb_elem, 32,
> +                       em_conf->ext_params.vector_size, socket_id);
>                 if (vector_pool == NULL) {
>                         EH_LOG_ERR("failed to create event vector pool");
>                         return -ENOMEM;
> diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
> index dcc72f3f1e..44303d10c2 100644
> --- a/examples/l2fwd-event/main.c
> +++ b/examples/l2fwd-event/main.c
> @@ -678,8 +678,10 @@ main(int argc, char **argv)
>
>                 vec_size = rsrc->evt_vec.size;
>                 nb_vec = (nb_mbufs + vec_size - 1) / vec_size;
> +               nb_vec = RTE_MAX(512U, nb_vec);
> +               nb_vec += rte_lcore_count() * 32;
>                 rsrc->evt_vec_pool = rte_event_vector_pool_create(
> -                       "vector_pool", nb_vec, 0, vec_size, rte_socket_id());
> +                       "vector_pool", nb_vec, 32, vec_size, rte_socket_id());
>                 if (rsrc->evt_vec_pool == NULL)
>                         rte_panic("Cannot init event vector pool\n");
>         }
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index a629198223..896a347db3 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -1006,9 +1006,11 @@ init_mem(uint16_t portid, unsigned int nb_mbuf)
>
>                         nb_vec = (nb_mbuf + evt_rsrc->vector_size - 1) /
>                                  evt_rsrc->vector_size;
> +                       nb_vec = RTE_MAX(512U, nb_vec);
> +                       nb_vec += rte_lcore_count() * 32;
>                         snprintf(s, sizeof(s), "vector_pool_%d", portid);
>                         vector_pool[portid] = rte_event_vector_pool_create(
> -                               s, nb_vec, 0, evt_rsrc->vector_size, socketid);
> +                               s, nb_vec, 32, evt_rsrc->vector_size, socketid);
>                         if (vector_pool[portid] == NULL)
>                                 rte_exit(EXIT_FAILURE,
>                                          "Failed to create vector pool for port %d\n",
> --
> 2.25.1
>
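For reference, a minimal sketch of the sizing pattern the patch applies in
all three examples. The helper name create_vector_pool() below is
hypothetical; the rte_event_vector_pool_create(), RTE_MAX() and
rte_lcore_count() usage mirrors the diff above: floor the pool at 512
vectors, then add one cache worth of headroom per lcore so the 32-entry
per-lcore mempool caches cannot drain the pool on the Rx side.

#include <rte_common.h>
#include <rte_eventdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Hypothetical helper illustrating the sizing pattern used above. */
static struct rte_mempool *
create_vector_pool(const char *name, unsigned int nb_vec,
		   uint16_t vector_size, int socket_id)
{
	const unsigned int cache_size = 32;

	/* Guarantee a minimum of 512 vectors to avoid Rx contention. */
	nb_vec = RTE_MAX(512U, nb_vec);
	/* Vectors freed by the Tx routine land in the freeing lcore's
	 * mempool cache first, so reserve cache_size objects per lcore. */
	nb_vec += rte_lcore_count() * cache_size;

	return rte_event_vector_pool_create(name, nb_vec, cache_size,
					    vector_size, socket_id);
}

Passing a non-zero cache_size means frees go to the per-lcore cache instead
of the shared ring, which is why the extra rte_lcore_count() * 32 objects
are added to the pool size.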


Thread overview: 5+ messages
2022-05-23  9:59 [PATCH 1/2] app/eventdev: " pbhagavatula
2022-05-23  9:59 ` [PATCH 2/2] examples: " pbhagavatula
2022-06-10 13:46   ` Jerin Jacob [this message]
2022-06-10 13:44 ` [PATCH 1/2] app/eventdev: " Jerin Jacob
2022-06-13  5:56   ` Jerin Jacob
