From: Thomas Monjalon <thomas@monjalon.net>
To: Hanumanth Pothula <hpothula@marvell.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Ferruh Yigit <ferruh.yigit@amd.com>, dev@dpdk.org
Subject: Re: [PATCH v7 2/4] ethdev: support mulitiple mbuf pools per Rx queue
Date: Fri, 07 Oct 2022 18:08:57 +0200
Message-ID: <13112956.dW097sEU6C@thomas>
In-Reply-To: <20221007143723.3204575-3-andrew.rybchenko@oktetlabs.ru>
References: <20221006170126.1322852-1-hpothula@marvell.com>
 <20221007143723.3204575-1-andrew.rybchenko@oktetlabs.ru>
 <20221007143723.3204575-3-andrew.rybchenko@oktetlabs.ru>
List-Id: DPDK patches and discussions <dev.dpdk.org>

07/10/2022 16:37, Andrew Rybchenko:
> From: Hanumanth Pothula <hpothula@marvell.com>
> 
> Some of the HW has support for choosing memory pools based on the
> packet's size. The capability allows to choose a memory pool based
> on the packet's length.

The second sentence is redundant.

> This is often useful for saving the memory where the application
> can create a different pool to steer the specific size of the
> packet, thus enabling more efficient usage of memory.
[...]
> +* **Added support for mulitiple mbuf pools per ethdev Rx queue.**

mulitiple -> multiple

> +
> +  * Added support for multiple mbuf pools per Rx queue. The capability allows

No need to repeat the title.

> +    application to provide many mempools of different size and PMD to choose
> +    a memory pool based on the packet's length and/or Rx buffers availability.
[...]
> +	/* Ensure that we have one and only one source of Rx buffers */
> +	if ((mp != NULL) +

Why the + operator on boolean expressions?
Are we sure a true comparison always evaluates to exactly 1?

> +	    (rx_conf != NULL && rx_conf->rx_nseg > 0) +
> +	    (rx_conf != NULL && rx_conf->rx_nmempool > 0) != 1) {
> +		RTE_ETHDEV_LOG(ERR,
> +			       "Ambiguous Rx mempools configuration\n");
> +		return -EINVAL;
> +	}
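For what it's worth, ISO C does guarantee that the result of ==, != and && is an int equal to 0 or 1, so summing the three conditions counts the configured buffer sources. A more explicit equivalent could look like the sketch below; the struct and helper names are invented for illustration, not taken from the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct rte_mempool;	/* opaque, as in ethdev */

/* Hypothetical stand-in for the relevant rte_eth_rxconf fields. */
struct rx_conf_sketch {
	uint16_t rx_nseg;
	uint16_t rx_nmempool;
};

/*
 * Count how many Rx buffer sources are configured.  Exactly one of
 * the three (single mempool, Rx segments, multiple mempools) must
 * be set; the caller rejects any other count as ambiguous.
 */
static int
rx_buffer_sources(const struct rte_mempool *mp,
		  const struct rx_conf_sketch *rx_conf)
{
	int n = 0;

	if (mp != NULL)
		n++;
	if (rx_conf != NULL && rx_conf->rx_nseg > 0)
		n++;
	if (rx_conf != NULL && rx_conf->rx_nmempool > 0)
		n++;
	return n;
}
```

The if/increment form trades two lines per condition for not having to remember that boolean results sum arithmetically.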
[...]
> @@ -1067,6 +1067,24 @@ struct rte_eth_rxconf {
>  	 */
>  	union rte_eth_rxseg *rx_seg;
>  
> +	/**
> +	 * Array of mempools to allocate Rx buffers from.
> +	 *
> +	 * This provides support for multiple mbuf pools per Rx queue.
> +	 * The capability is reported in device info via positive
> +	 * max_rx_mempools.
> +	 *
> +	 * It could be useful for more efficient usage of memory when an
> +	 * application creates different mempools to steer the specific
> +	 * size of the packet.
> +	 *
> +	 * Note that if Rx scatter is enabled, a packet may be delivered using
> +	 * a chain of mbufs obtained from single mempool or multiple mempools
> +	 * based on the NIC implementation.
> +	 */
> +	struct rte_mempool **rx_mempools;
> +	uint16_t rx_nmempool; /** < Number of Rx mempools */

The commit message suggests the mempool choice depends on packet size.
I guess the size thresholds are not configurable via the ethdev API?
If they are hard-coded in the HW or in the driver only,
that should be documented here.
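To illustrate the concern: presumably the PMD selects a pool per packet along the lines of the sketch below. The thresholds here are invented, and that is exactly the kind of driver-internal detail the documentation should spell out:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical per-length pool selection as a PMD might implement it.
 * Buffer size of each pool, ordered smallest first; these values are
 * made up for illustration and are not part of the ethdev API.
 */
static const uint32_t pool_buf_size[] = { 2048, 9216 };

/*
 * Return the index of the first pool whose buffers can hold the whole
 * packet, or the last pool if none can (Rx scatter would then be
 * needed to chain mbufs).
 */
static unsigned int
select_rx_pool(uint32_t pkt_len, unsigned int nb_pools)
{
	unsigned int i;

	for (i = 0; i < nb_pools; i++)
		if (pkt_len <= pool_buf_size[i])
			return i;
	return nb_pools - 1;
}
```

Since nothing in rx_conf carries such thresholds, the application can only infer the behavior from each pool's mbuf data room size, which argues for stating the selection rule in the field's documentation.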

[...]
> +	/**
> +	 * Maximum number of Rx mempools supported per Rx queue.
> +	 *
> +	 * Value greater than 0 means that the driver supports Rx queue
> +	 * mempools specification via rx_conf->rx_mempools.
> +	 */
> +	uint16_t max_rx_mempools;
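On the application side, gating on this capability would presumably reduce to a check like the following sketch; the struct names are stand-ins for the real rte_eth_dev_info / rte_eth_rxconf fields:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for the two new fields (hypothetical struct
 * names; the real fields live in rte_ethdev.h structures). */
struct dev_info_sketch {
	uint16_t max_rx_mempools;
};

/*
 * The feature is available only when the driver reports a positive
 * max_rx_mempools, and rx_nmempool must not exceed it.
 */
static int
rx_mempools_supported(const struct dev_info_sketch *info, uint16_t wanted)
{
	return info->max_rx_mempools > 0 && wanted <= info->max_rx_mempools;
}
```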