From: Thomas Monjalon
To: "Jiawei(Jonny) Wang", Andrew Rybchenko
Cc: Slava Ovsiienko, Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, dev@dpdk.org, Raslan Darawsheh
Subject: Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
Date: Thu, 02 Feb 2023 15:43:44 +0100
Message-ID: <3683800.QJadu78ljV@thomas>
List-Id: DPDK patches and discussions

02/02/2023 10:28, Andrew Rybchenko:
> On 2/1/23 18:50, Jiawei(Jonny) Wang wrote:
> > From: Andrew Rybchenko
> >> On 1/30/23 20:00, Jiawei Wang wrote:
> >>> Adds the new tx_phy_affinity field into the padding hole of the
> >>> rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
> >>> Adds a suppress type for the structure change in the ABI check file.
> >>>
> >>> This patch adds the testpmd command line:
> >>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >>>
> >>> For example, if two hardware ports 0 and 1 are connected to a single
> >>> DPDK port (port id 0), with phy_affinity 1 standing for hardware
> >>> port 0 and phy_affinity 2 for hardware port 1, use the commands
> >>> below to configure the Tx phy affinity per Tx queue:
> >>> port config 0 txq 0 phy_affinity 1
> >>> port config 0 txq 1 phy_affinity 1
> >>> port config 0 txq 2 phy_affinity 2
> >>> port config 0 txq 3 phy_affinity 2
> >>>
> >>> These commands configure TxQ 0 and TxQ 1 with phy affinity 1, so
> >>> packets sent on TxQ 0 or TxQ 1 leave through hardware port 0;
> >>> similarly, packets sent on TxQ 2 or TxQ 3 leave through hardware
> >>> port 1.
> >>
> >> Frankly speaking, I dislike it. Why do we need to expose it at the
> >> generic ethdev layer? IMHO a dynamic mbuf field would be a better
> >> solution to control Tx routing to a specific PHY port.

The design of this patch is to map a queue of the front device
to an underlying port. This design may be applicable to several
situations, including the DPDK bonding PMD, or Linux bonding
connected to a PMD.
The default value 0 means the queue is not mapped to anything
(no change). If the affinity is higher than 0, the queue can be
configured as desired. Then, if an application wants to send a
packet to a specific underlying port, it just has to send it on
the right queue.

Functionally, mapping the queue or setting the port in the mbuf
(your proposal) are the same. The advantages of the queue mapping are:
- it is faster to use a queue than to fill an mbuf field
- optimizations can be done at queue setup

[...]
> Why should these queues be visible to the DPDK application?
> Nothing prevents you from creating many HW queues behind one
> ethdev queue.
> Of course, there are questions related to the descriptor status
> API in this case, but IMHO it would be better than exposing
> these details at the application level.

Why not map the queues if the application requires these details?

> >> IMHO, we definitely need dev_info information about the number
> >> of physical ports behind.

Yes, dev_info would be needed.

> >> Advertising a value greater than 0 should mean that the PMD
> >> supports the corresponding mbuf dynamic field to control the
> >> outgoing physical port on Tx (or should just reject packets on
> >> prepare which try to specify an outgoing phy port otherwise).
> >> In the same way, the information may be provided on Rx.
> >
> > See above, I think phy affinity is queue-level, not per packet.
>
> >> I'm OK to have 0 as the "no phy affinity" value and greater than
> >> zero as a specified phy affinity, i.e. no dynamic flag is required.
> >
> > Thanks for the agreement.
>
> >> Also I think that the order of patches should be different.
> >> We should start from a patch which provides dev_info and flow
> >> API matching, and the action should be in a later patch.
> >
> > OK.