From: Thomas Monjalon
To: Andrew Rybchenko
Cc: Ferruh Yigit, dev@dpdk.org, Hanumanth Pothula
Subject: Re: [PATCH v8 2/4] ethdev: support multiple mbuf pools per Rx queue
Date: Fri, 07 Oct 2022 20:35:07 +0200
Message-ID: <2575143.k3LOHGUjKi@thomas>
In-Reply-To: <20221007172921.3325250-3-andrew.rybchenko@oktetlabs.ru>
References: <20221006170126.1322852-1-hpothula@marvell.com> <20221007172921.3325250-1-andrew.rybchenko@oktetlabs.ru> <20221007172921.3325250-3-andrew.rybchenko@oktetlabs.ru>
07/10/2022 19:29, Andrew Rybchenko:
> +* **Added support for mulitiple mbuf pools per ethdev Rx queue.**

mulitiple -> multiple
I can fix when merging.

> +
> + The capability allows application to provide many mempools of different
> + size and PMD and/or NIC to choose a memory pool based on the packet's
> + length and/or Rx buffers availability.
[...]
> + /**
> + * Array of mempools to allocate Rx buffers from.
> + *
> + * This provides support for multiple mbuf pools per Rx queue.
> + * The capability is reported in device info via positive
> + * max_rx_mempools.
> + *
> + * It could be useful for more efficient usage of memory when an
> + * application creates different mempools to steer the specific
> + * size of the packet.
> + *
> + * If many mempools are specified, packets received using Rx
> + * burst may belong to any provided mempool. From ethdev user point
> + * of view it is undefined how PMD/NIC chooses mempool for a packet.
> + *
> + * If Rx scatter is enabled, a packet may be delivered using a chain
> + * of mbufs obtained from single mempool or multiple mempools based
> + * on the NIC implementation.
> + */
> + struct rte_mempool **rx_mempools;
> + uint16_t rx_nmempool; /** < Number of Rx mempools */

OK, it's clear, thanks.
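
For readers following the thread, below is a minimal, untested sketch (not part of the patch) of how an application could fill the new rx_mempools/rx_nmempool fields; the pool names, sizes and descriptor counts are purely illustrative.

#include <errno.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
setup_multi_pool_rxq(uint16_t port_id, uint16_t queue_id, int socket)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	struct rte_mempool *pools[2];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* A positive max_rx_mempools reports the capability. */
	if (dev_info.max_rx_mempools < RTE_DIM(pools))
		return -ENOTSUP;

	/* One pool for small buffers, one for large buffers
	 * (sizes are illustrative only). */
	pools[0] = rte_pktmbuf_pool_create("rx_small", 4096, 256, 0,
			512 + RTE_PKTMBUF_HEADROOM, socket);
	pools[1] = rte_pktmbuf_pool_create("rx_large", 4096, 256, 0,
			9216 + RTE_PKTMBUF_HEADROOM, socket);
	if (pools[0] == NULL || pools[1] == NULL)
		return -ENOMEM;

	rxconf = dev_info.default_rxconf;
	rxconf.rx_mempools = pools;
	rxconf.rx_nmempool = RTE_DIM(pools);

	/* The single-mempool argument is left NULL when rx_mempools is used. */
	return rte_eth_rx_queue_setup(port_id, queue_id, 512, socket,
			&rxconf, NULL);
}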