DPDK patches and discussions
From: Ophir Munk <ophirmu@nvidia.com>
To: Bruce Richardson <bruce.richardson@intel.com>,
	"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Matan Azrad <matan@nvidia.com>,
	Lior Margalit <lmargalit@nvidia.com>,
	Asaf Penso <asafp@nvidia.com>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>,
	"honnappa.nagarahalli@arm.com" <honnappa.nagarahalli@arm.com>,
	"jerinj@marvell.com" <jerinj@marvell.com>,
	"rmody@marvell.com" <rmody@marvell.com>,
	"dsinghrawat@marvell.com" <dsinghrawat@marvell.com>
Subject: RE: [PATCH v1] config: make max memzones definition configurable
Date: Mon, 13 Feb 2023 17:04:34 +0000
Message-ID: <CO6PR12MB5490DA0219205153AF8E1A53DCDD9@CO6PR12MB5490.namprd12.prod.outlook.com>
In-Reply-To: <Y+pOriMP6vQ7XumV@bricha3-MOBL.ger.corp.intel.com>

Since the new rte API was "discussed in recent years" and also depends on acceptance by the different driver vendors, I suggest that the compilation option be applied now.
The new rte API effort will start in parallel; once accepted, it will replace the compilation option.
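For illustration, here is a minimal sketch of how the runtime alternative discussed in the quoted mail below could look from the application side. rte_memzone_set_max() is only the name suggested by Thomas; it does not exist in DPDK today, so its signature and semantics here are assumptions, and it is stubbed locally so the sketch is self-contained:

    #include <rte_eal.h>

    /* Hypothetical future EAL API, per the suggestion in the quoted mail
     * below.  Stubbed here only so the example builds; a real
     * implementation would live inside EAL. */
    static int memzone_max = 2560;               /* historical default */
    static void rte_memzone_set_max(int max) { memzone_max = max; }

    int main(int argc, char **argv)
    {
            /* Must run before rte_eal_init(); if never called, the
             * historical default (2560) stays in effect. */
            rte_memzone_set_max(8192);

            if (rte_eal_init(argc, argv) < 0)
                    return -1;

            /* ... create mempools, steering tables, etc. ... */
            return 0;
    }

With the compile-time approach in this patch, the same knob would instead be set on the meson command line, e.g. by passing -Dmax_memzones=8192 to "meson setup".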

> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Monday, 13 February 2023 16:53
> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
> Cc: Ophir Munk <ophirmu@nvidia.com>; dev@dpdk.org; Matan Azrad
> <matan@nvidia.com>; Lior Margalit <lmargalit@nvidia.com>; Asaf Penso
> <asafp@nvidia.com>; david.marchand@redhat.com;
> honnappa.nagarahalli@arm.com; jerinj@marvell.com; rmody@marvell.com;
> dsinghrawat@marvell.com
> Subject: Re: [PATCH v1] config: make max memzones definition configurable
> 
> On Mon, Feb 13, 2023 at 02:55:41PM +0100, Thomas Monjalon wrote:
> > 13/02/2023 12:05, Bruce Richardson:
> > > On Sun, Feb 12, 2023 at 10:53:19AM +0200, Ophir Munk wrote:
> > > > In the current DPDK code, the RTE_MAX_MEMZONE definition is
> > > > unconditionally hard coded as 2560.  For applications requiring a
> > > > different value of this parameter, it is more convenient to set it
> > > > on the meson command line rather than changing the DPDK source
> > > > code per application.  An example is an application that uses the
> > > > DPDK mempool library, which is built on top of the DPDK memzone
> > > > library.  Such an application may need to create a number of
> > > > steering tables, each of which requires its own mempool allocation.
> > > > This commit adds an optional meson parameter named max_memzones.
> > > > If not specified, it defaults to 2560.  The hard coded definition
> > > > of RTE_MAX_MEMZONE is removed.  During the meson build,
> > > > RTE_MAX_MEMZONE is defined as the value of the max_memzones
> > > > parameter.
> > > >
> > > > Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
> > > > ---
> > > > RFC: https://patchwork.dpdk.org/project/dpdk/patch/20230130092302.376145-1-ophirmu@nvidia.com/
> > > >
> > > >  config/meson.build  | 1 +
> > > >  config/rte_config.h | 1 -
> > > >  meson_options.txt   | 2 ++
> > > >  3 files changed, 3 insertions(+), 1 deletion(-)
> > > >
> > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> >
> > Are we going to move all compilation-defined settings to
> > meson_options.txt?  The direction discussed in recent years was to
> > configure things at runtime, and stop adding compilation-time settings.
> >
> > In this case, it is quite easy to add a new function void
> > rte_memzone_set_max(int max) to be called before rte_eal_init().  If
> > not called, the historical default is used.
> >
> Good point, I admit I had forgotten that.
> 
> Looking at the use of RTE_MAX_MEMZONE, it is used as an array dimension
> in a number of places, but, from what I see on cursory examination, it should
> be replaceable with a runtime value without significant pain in most cases.
> The one that probably needs more attention is the fact that the "net/qede"
> driver maintains an array of memzones in its base-code layer. Therefore, we
> probably need input from that driver maintainer to know the impact there
> and why that array is needed in a net driver. [Adding the two maintainers on
> CC]
> 
> /Bruce
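
As an illustration of the pattern described above (RTE_MAX_MEMZONE used as a compile-time array dimension) versus a runtime-configured limit, a generic C sketch follows. It is not actual DPDK code; all names are made up:

    #include <stdlib.h>

    /* Stand-in for the real struct rte_memzone; illustration only. */
    struct mz_stub {
            char name[32];
            size_t len;
    };

    /* Today: the table size is fixed when the binary is built. */
    #define MAX_MEMZONES 2560
    static struct mz_stub fixed_table[MAX_MEMZONES];

    /* Runtime alternative: the limit is taken from a value configured
     * before EAL initialization, e.g. by a setter such as the
     * hypothetical rte_memzone_set_max() mentioned earlier in the thread. */
    static struct mz_stub *dyn_table;
    static unsigned int dyn_max = 2560;          /* historical default */

    static int memzone_table_init(unsigned int requested_max)
    {
            if (requested_max != 0)
                    dyn_max = requested_max;
            dyn_table = calloc(dyn_max, sizeof(*dyn_table));
            return dyn_table == NULL ? -1 : 0;
    }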

Thread overview: 6+ messages
2023-02-12  8:53 Ophir Munk
2023-02-13 11:05 ` Bruce Richardson
2023-02-13 13:55   ` Thomas Monjalon
2023-02-13 14:52     ` Bruce Richardson
2023-02-13 17:04       ` Ophir Munk [this message]
2023-02-21 10:28         ` Thomas Monjalon
