From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: Ophir Munk <ophirmu@nvidia.com>
Cc: <dev@dpdk.org>, Ophir Munk <ophirmu@mellanox.com>,
Matan Azrad <matan@nvidia.com>,
"Thomas Monjalon" <homas@monjalon.net>,
Bruce Richardson <bruce.richardson@intel.com>,
Lior Margalit <lmargalit@nvidia.com>
Subject: Re: [RFC] config: customize max memzones configuration
Date: Mon, 30 Jan 2023 13:00:38 +0300
Message-ID: <20230130130038.1d75cc01@sovereign>
In-Reply-To: <20230130092302.376145-1-ophirmu@nvidia.com>
2023-01-30 11:23 (UTC+0200), Ophir Munk:
> In current DPDK the RTE_MAX_MEMZONE definition is unconditionally
> hard-coded as 2560. For applications requiring a different value of
> this parameter, it is more convenient to set it on the meson command
> line or via an rte API than to change the DPDK source code per
> application.
>
> An example would be an application that uses the DPDK mempool library,
> which is built on the DPDK memzone library. The application may need
> to create a number of steering tables, each of which requires its own
> mempool allocation. This RFC is not about optimizing the application's
> use of mempools, nor about improving the mempool implementation on top
> of memzone. It is about making the max memzone definition customizable
> at build time or at run time.
>
> I would like to suggest three options.
>
> Option 1
> ========
> Add a Meson option in meson_options.txt and remove the
> RTE_MAX_MEMZONE definition from config/rte_config.h.
>
[...]
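(For illustration, a minimal sketch of what Option 1 could look like;
the option name "max_memzones" is hypothetical, not taken from the RFC:)

    # meson_options.txt
    option('max_memzones', type: 'integer', value: 2560,
           description: 'Maximum number of memzones (RTE_MAX_MEMZONE)')

    # config/meson.build
    dpdk_conf.set('RTE_MAX_MEMZONE', get_option('max_memzones'))

A build would then override it with, e.g.,
`meson setup build -Dmax_memzones=10000`.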
>
> Option 2
> ========
> Use `meson setup -Dc_args="-DRTE_MAX_MEMZONE=XXX"` and
> make the RTE_MAX_MEMZONE definition conditional in config/rte_config.h.
>
> For example, see the code of this commit.
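(A sketch of how the conditional definition could look; the guard below
is my illustration and may differ from the commit referenced above:)

    /* config/rte_config.h: default value, overridable via -Dc_args */
    #ifndef RTE_MAX_MEMZONE
    #define RTE_MAX_MEMZONE 2560
    #endif

so that a user can configure, e.g.:

    meson setup build -Dc_args="-DRTE_MAX_MEMZONE=10000"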
>
> Option 3
> ========
> Add a function which must be called before rte_eal_init():
> void rte_memzone_set_max(int max) {memzone_max = max;}
> If not called, the default maximum (RTE_MAX_MEMZONE) is used.
>
> With this option there is no need to recompile DPDK, and an in-box
> (distribution-packaged) DPDK can be used.
[...]
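(A minimal usage sketch of Option 3, assuming the rte_memzone_set_max()
proposed above; the function does not exist in DPDK at the time of this
RFC, and 10000 is an arbitrary example value:)

    #include <rte_eal.h>
    #include <rte_memzone.h> /* would declare the proposed setter */

    int main(int argc, char **argv)
    {
            /* Must run before rte_eal_init() to take effect. */
            rte_memzone_set_max(10000);
            if (rte_eal_init(argc, argv) < 0)
                    return -1;
            /* ... application code ... */
            return 0;
    }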
Ideally, there should be no limitation at all.
I vote for some compile-time solution for now
with a plan to remove the restriction inside EAL later.
Option 2 does not expose a user-facing option that would become obsolete
anyway, so it seems preferable to me, but Option 1 is also OK and consistent.
`RTE_MAX_MEMZONE` is currently needed because `struct rte_memzone` entries
are stored in an `rte_fbarray` (a mapped file-backed array) with a fixed
capacity. Unlike e.g. `rte_eth_devices`, this array is not used for
efficient access by index and is not even exposed to PMDs, so the storage
can be changed painlessly. Within DPDK, only net/qede uses this constant,
and only for slow-path bookkeeping.
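(For context, roughly how EAL sizes this storage today in
lib/eal/common/eal_common_memzone.c; simplified from memory, so treat it
as a sketch:)

    /* Primary process: the capacity is fixed here and cannot grow later. */
    rte_fbarray_init(&mcfg->memzones, "memzone",
                     RTE_MAX_MEMZONE, sizeof(struct rte_memzone));

Removing the restriction would mean replacing this fixed-capacity array
with storage that can grow.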