Subject: RE: [RFC PATCH] config: make queues per port a meson config option
Date: Mon, 12 Aug 2024 17:02:11 +0200
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35E9F620@smartserver.smartshare.dk>
References: <20240812132910.162252-1-bruce.richardson@intel.com> <98CBD80474FA8B44BF855DF32C47DC35E9F61F@smartserver.smartshare.dk>
From: Morten Brørup
To: Bruce Richardson
List-Id: DPDK patches and discussions

> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
>
> On Mon, Aug 12, 2024 at 04:10:49PM +0200, Morten Brørup wrote:
> > > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > >
> > > The default number of ethernet queues per port is currently set to
> > > 1k, which is more than enough for most applications, but still is
> > > lower than the total number of queues which may be available on
> > > modern NICs. Rather than increasing the max queues further, which
> > > will increase the memory footprint (since the value is used in
> > > array dimensioning), we can instead make the value a meson tunable
> > > option - and reduce the default value to 256 in the process.
> >
> > Overall, I agree that this tunable is not very exotic, and can be
> > exposed as suggested.
> > The reduction of the default value must be mentioned in the release
> > notes.
>
> Yes, good point. I'll add that in any next version.

ACK.

>
> > > # set other values pulled from the build options
> > > dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
> > > +dpdk_conf.set('RTE_MAX_QUEUES_PER_PORT',
> > > get_option('max_queues_per_ethport'))
> >
> > Please rename RTE_MAX_QUEUES_PER_PORT to _PER_ETHPORT, so it
> > resembles MAX_ETHPORTS. For API backwards compatibility, you can add:
> > #define RTE_MAX_QUEUES_PER_PORT RTE_MAX_QUEUES_PER_ETHPORT
>
> Agree that would be more consistent. That would probably best be a
> separate patch, since we'd want to convert all internal use over. Will
> make this a two-patch set in next version.

ACK. And agree about the two-patch series.

>
> > I wonder if it would be possible to have separate max sizes for RX
> > and TX queues? If it can save a non-negligible amount of memory, it
> > might be useful for some applications.
>
> That is an interesting idea. It would certainly allow saving memory for
> use-cases where you want a large number of rx queues only, or tx queues
> only. However, the defaults are still likely to remain the same.
> The main issue I would have with it is that it would mean having two
> build-time options rather than one, and I'm a bit concerned about the
> number of options we seem to be accumulating in DPDK.
>
> Overall, I'm tending towards suggesting that we not do that split, but
> I'm open to being convinced on it.

I would guess that many applications have an asymmetrical split of the
number of RX and TX queues. So I would argue:
The reason to make this configurable in meson is to conserve memory, so
why only go half the way if there is more memory to be conserved?
The distros will use oversized maximums anyway, but custom-built
applications might benefit.

Here's a weird thought:
Perhaps the RX and TX maximums can be controlled individually by
changing rte_config.h, and they can be overridden via one meson
configuration parameter that sets both (to the same value).

>
> > With suggested changes (splitting RX/TX maximums not required),
> > Acked-by: Morten Brørup

My ACK remains; splitting RX/TX maximums is not Must Have, it is Nice To
Have.