From: Thomas Monjalon
To: Hanumanth Pothula, Andrew Rybchenko
Cc: Ferruh Yigit, dev@dpdk.org
Subject: Re: [PATCH v8 0/4] ethdev: support multiple mbuf pools per Rx queue
Date: Sat, 08 Oct 2022 22:38:44 +0200
Message-ID: <3499917.LM0AJKV5NW@thomas>
In-Reply-To: <20221007172921.3325250-1-andrew.rybchenko@oktetlabs.ru>
References: <20221006170126.1322852-1-hpothula@marvell.com>
 <20221007172921.3325250-1-andrew.rybchenko@oktetlabs.ru>
07/10/2022 19:29, Andrew Rybchenko:
> I'm not sure about the testpmd patch. Review would be useful and maybe
> we should postpone it to rc2.
>
> v8:
>  - Process review notes.
> v7:
>  - Drop the RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL offload, which seems to be
>    unnecessary. A positive max_rx_mempools in dev_info is sufficient to
>    indicate that the capability is supported, and a positive number of
>    mempools in the Rx configuration is sufficient to request it.
>  - Add a helper patch to factor out the Rx mempool check, shared by the
>    single mempool, buffer split, and multiple mempools cases.
>  - Refine the check so that exactly one way to provide Rx buffers is
>    used: either a single mempool, or buffer split, or multiple mempools.
>  - Drop the feature advertisement in the net/cnxk patch since no such
>    feature is defined yet. I have no strong opinion on whether a new
>    feature is required or not.
> v6:
>  - Updated release notes, release_22_11.rst.
> v5:
>  - Declared memory pools as struct rte_mempool **rx_mempools rather
>    than as struct rte_mempool *mp.
>  - Added the feature in release notes.
>  - Updated conditions and strings as per review comments.
> v4:
>  - Renamed the offload capability from RTE_ETH_RX_OFFLOAD_BUFFER_SORT
>    to RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL.
>  - In struct rte_eth_rxconf, defined a new pointer which holds an array
>    of type struct rte_eth_rx_mempool (memory pools). This array is used
>    by the PMD to program multiple mempools.
> v3:
>  - Implemented the pool sort capability as a new Rx offload capability,
>    RTE_ETH_RX_OFFLOAD_BUFFER_SORT.
> v2:
>  - Along with the spec changes, uploading testpmd and driver changes.
>
> Andrew Rybchenko (1):
>   ethdev: factor out helper function to check Rx mempool
>
> Hanumanth Pothula (3):
>   ethdev: support multiple mbuf pools per Rx queue
>   net/cnxk: support multiple mbuf pools per Rx queue
>   app/testpmd: support multiple mbuf pools per Rx queue

Applied, except the testpmd patch, as recommended by Andrew. Thanks.
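
For readers following the thread, below is a minimal sketch of how an
application might drive the API as described in the v7/v8 notes above:
query dev_info for a positive max_rx_mempools, then pass an array of
mempools in the Rx queue configuration instead of a single pool. The
pool names, sizes, and the helper function itself are hypothetical
illustrations, not code from the applied patches.

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: configure one Rx queue with two mempools of different buffer
 * sizes. Per v7 there is no offload flag: a positive
 * dev_info.max_rx_mempools advertises the capability, and a positive
 * number of mempools in the Rx configuration requests it.
 */
static int
setup_multi_mempool_rxq(uint16_t port_id, uint16_t queue_id,
			uint16_t nb_desc, int socket_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	struct rte_mempool *pools[2];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Fewer than two means multiple Rx mempools are not usable here. */
	if (dev_info.max_rx_mempools < 2)
		return -ENOTSUP;

	/* Hypothetical pools: small buffers for short packets and large
	 * buffers for jumbo frames, so the PMD can match the buffer to
	 * the incoming packet length.
	 */
	pools[0] = rte_pktmbuf_pool_create("rx_small", 8192, 256, 0,
					   RTE_MBUF_DEFAULT_BUF_SIZE,
					   socket_id);
	pools[1] = rte_pktmbuf_pool_create("rx_large", 4096, 256, 0,
					   9216, socket_id);
	if (pools[0] == NULL || pools[1] == NULL)
		return -ENOMEM;

	rxconf = dev_info.default_rxconf;
	rxconf.rx_mempools = pools;	/* array of mempool pointers */
	rxconf.rx_nmempool = 2;		/* number of entries in the array */

	/* With rx_mempools set, the single mempool argument stays NULL,
	 * honoring the one-and-only-one rule refined in v7.
	 */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				      socket_id, &rxconf, NULL);
}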