From: Thomas Monjalon
To: Jiawei Wang
Cc: viacheslavo@nvidia.com, orika@nvidia.com, Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko, dev@dpdk.org, rasland@nvidia.com, jerinj@marvell.com
Subject: Re: [RFC 2/5] ethdev: introduce the affinity field in Tx queue API
Date: Wed, 18 Jan 2023 12:37:12 +0100
Message-ID: <2006382.jNaZZp9DzI@thomas>
In-Reply-To: <20221221102934.13822-3-jiaweiw@nvidia.com>
References: <20221221102934.13822-1-jiaweiw@nvidia.com> <20221221102934.13822-3-jiaweiw@nvidia.com>

21/12/2022 11:29, Jiawei Wang:
> For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> the previous patch introduces the new rte flow item to match the port
> affinity of the received packets.
>
> This patch adds the tx_affinity setting in Tx queue API, the affinity value
> reflects packets be sent to which hardware port.

I think "affinity" means we would like the packet to be sent
on a specific hardware port, but that it is not mandatory.
Is that the meaning you intend? Or should the port be mandatory?

> Adds the new tx_affinity field into the padding hole of rte_eth_txconf
> structure, the size of rte_eth_txconf keeps the same. Adds a suppress
> type for structure change in the ABI check file.
>
> This patch adds the testpmd command line:
> testpmd> port config (port_id) txq (queue_id) affinity (value)
>
> For example, there're two hardware ports connects to a single DPDK

connects -> connected

> port (port id 0), and affinity 1 stood for hard port 1 and affinity
> 2 stood for hardware port 2, used the below command to config
> tx affinity for each TxQ:
> 	port config 0 txq 0 affinity 1
> 	port config 0 txq 1 affinity 1
> 	port config 0 txq 2 affinity 2
> 	port config 0 txq 3 affinity 2
>
> These commands config the TxQ index 0 and TxQ index 1 with affinity 1,
> uses TxQ 0 or TxQ 1 send packets, these packets will be sent from the
> hardware port 1, and similar with hardware port 2 if sending packets
> with TxQ 2 or TxQ 3.
[...]
> @@ -212,6 +212,10 @@ API Changes
> +* ethdev: added a new field:
> +
> +  - Tx affinity per-queue ``rte_eth_txconf.tx_affinity``

Adding a new field is not an API change, because existing applications
do not need to update their code if they do not care about this new field.
I think you can remove this note.

> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1138,6 +1138,7 @@ struct rte_eth_txconf {
>  		less free descriptors than this value. */
>
>  	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> +	uint8_t tx_affinity; /**< Drives the setting of affinity per-queue. */

Why "Drives"? It is the setting itself, right?
rte_eth_txconf is already per-queue, so there is no need to repeat that.
A good comment here would mention that it is a physical port index for mhpsdp.
Another good comment would specify how the ports are numbered.
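
To make sure we are discussing the same behaviour, here is a minimal
sketch of how an application would use this field, assuming the RFC as
posted: the field name tx_affinity, the values 1 and 2 from the testpmd
example above, and hardware-port numbering starting at 1 are all taken
from this patch, not from a settled API.

#include <rte_ethdev.h>

/* Sketch: configure 4 Tx queues on a DPDK port so that queues 0-1 are
 * affine to hardware port 1 and queues 2-3 to hardware port 2,
 * mirroring the testpmd commands from the commit message.
 */
static int
setup_tx_affinity(uint16_t port_id, uint16_t nb_txd)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	uint16_t q;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	for (q = 0; q < 4; q++) {
		txconf = dev_info.default_txconf;
		/* New field from this RFC; 1 and 2 are hardware port
		 * numbers as in the commit message (the numbering
		 * scheme is exactly what I am asking to document). */
		txconf.tx_affinity = (q < 2) ? 1 : 2;
		ret = rte_eth_tx_queue_setup(port_id, q, nb_txd,
				rte_eth_dev_socket_id(port_id),
				&txconf);
		if (ret != 0)
			return ret;
	}
	return 0;
}

If that matches your intent, then the doc comment only needs to say
what the value refers to and how it is numbered.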