From: Thomas Monjalon
To: "Jiawei(Jonny) Wang"
Cc: Slava Ovsiienko, Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko, "dev@dpdk.org", Raslan Darawsheh, "jerinj@marvell.com"
Subject: Re: [RFC 2/5] ethdev: introduce the affinity field in Tx queue API
Date: Wed, 18 Jan 2023 17:31:58 +0100
Message-ID: <6334712.YiXZdWvhHV@thomas>
References: <20221221102934.13822-1-jiaweiw@nvidia.com> <2006382.jNaZZp9DzI@thomas>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
18/01/2023 15:44, Jiawei(Jonny) Wang:
> > 21/12/2022 11:29, Jiawei Wang:
> > > For the multiple hardware ports connect to a single DPDK port
> > > (mhpsdp), the previous patch introduces the new rte flow item to match
> > > the port affinity of the received packets.
> > >
> > > This patch adds the tx_affinity setting in Tx queue API, the affinity
> > > value reflects packets be sent to which hardware port.
> >
> > I think "affinity" means we would like packet to be sent on a specific
> > hardware port, but it is not mandatory.
> > Is it the meaning you want? Or should it be a mandatory port?
>
> Right, it's optional setting not mandatory.

I think there is a misunderstanding.
I mean that "affinity" with port 0 may suggest that we try to send
to port 0 but sometimes the packet will be sent to port 1.
And I think you want the packet to be always sent to port 0
if affinity is 0, right?
If yes, I think the word "affinity" does not convey the right idea.
And again, the naming should give the idea
that we are talking about multiple ports merged in one DPDK port.

> > > Adds the new tx_affinity field into the padding hole of rte_eth_txconf
> > > structure, the size of rte_eth_txconf keeps the same. Adds a suppress
> > > type for structure change in the ABI check file.
> > >
> > > This patch adds the testpmd command line:
> > > testpmd> port config (port_id) txq (queue_id) affinity (value)
> > >
> > > For example, there're two hardware ports connects to a single DPDK
> >
> > connects -> connected
>
> OK, will fix in next version.
>
> > > port (port id 0), and affinity 1 stood for hard port 1 and affinity
> > > 2 stood for hardware port 2, used the below command to config tx
> > > affinity for each TxQ:
> > >     port config 0 txq 0 affinity 1
> > >     port config 0 txq 1 affinity 1
> > >     port config 0 txq 2 affinity 2
> > >     port config 0 txq 3 affinity 2
> > >
> > > These commands config the TxQ index 0 and TxQ index 1 with affinity 1,
> > > uses TxQ 0 or TxQ 1 send packets, these packets will be sent from the
> > > hardware port 1, and similar with hardware port 2 if sending packets
> > > with TxQ 2 or TxQ 3.
> >
> > [...]
> > > @@ -212,6 +212,10 @@ API Changes
> > > +* ethdev: added a new field:
> > > +
> > > +  - Tx affinity per-queue ``rte_eth_txconf.tx_affinity``
> >
> > Adding a new field is not an API change because existing applications
> > don't need to update their code if they don't care this new field.
> > I think you can remove this note.
>
> OK, will remove in next version.
>
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -1138,6 +1138,7 @@ struct rte_eth_txconf {
> > >              less free descriptors than this value. */
> > >
> > >  	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > +	uint8_t tx_affinity; /**< Drives the setting of affinity per-queue. */
> >
> > Why "Drives"? It is the setting, right?
> > rte_eth_txconf is per-queue so no need to repeat.
> > I think a good comment here would be to mention it is a physical port
> > index for mhpsdp.
> > Another good comment would be to specify how ports are numbered.
>
> OK, will update the comment for this new setting.
>
> Thanks.
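
For reference, a minimal sketch of the usage discussed above, assuming the
tx_affinity field proposed in this RFC (not yet part of the upstream ethdev
API) and the numbering from the quoted testpmd example, where value 1 selects
the first hardware port behind the DPDK port and value 2 the second. Queue
count and descriptor numbers are illustrative only.

    #include <rte_ethdev.h>

    /*
     * Hypothetical example following this RFC: spread four Tx queues of
     * one DPDK port across the two hardware ports behind it, mirroring
     * the testpmd lines quoted above (queues 0-1 -> hardware port 1,
     * queues 2-3 -> hardware port 2).
     */
    static int
    setup_tx_queues_with_affinity(uint16_t port_id, uint16_t nb_txd,
                                  unsigned int socket_id)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_txconf txconf;
            uint16_t q;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            /* Start from the driver's default Tx queue configuration. */
            txconf = dev_info.default_txconf;

            for (q = 0; q < 4; q++) {
                    /* tx_affinity is the per-queue field proposed in this RFC. */
                    txconf.tx_affinity = (q < 2) ? 1 : 2;
                    ret = rte_eth_tx_queue_setup(port_id, q, nb_txd,
                                                 socket_id, &txconf);
                    if (ret != 0)
                            return ret;
            }
            return 0;
    }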