From: Thomas Monjalon
To: "Jiawei(Jonny) Wang"
Cc: Slava Ovsiienko, Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko, dev@dpdk.org, Raslan Darawsheh, Ivan Malov, jerinj@marvell.com
Subject: Re: [RFC 1/5] ethdev: add port affinity match item
Date: Wed, 18 Jan 2023 17:26:53 +0100
Message-ID: <6248682.NeCsiYhmir@thomas>
References: <20221221102934.13822-1-jiaweiw@nvidia.com> <16543991.hlxOUv9cDv@thomas>
List-Id: DPDK patches and discussions

18/01/2023 15:41, Jiawei(Jonny) Wang:
> Hi,
>
> >
> > 21/12/2022 11:29, Jiawei Wang:
> > > +	/**
> > > +	 * Matches on the physical port affinity of the received packet.
> > > +	 *
> > > +	 * See struct rte_flow_item_port_affinity.
> > > +	 */
> > > +	RTE_FLOW_ITEM_TYPE_PORT_AFFINITY,
> > > };
> >
> > I'm not sure about the word "affinity".
> > I think you want to match on a physical port.
> > It could be a global physical port id or an index in the group
> > of physical ports connected to a single DPDK port.
> > In the first case, the name of the item could be RTE_FLOW_ITEM_TYPE_PHY_PORT;
> > in the second case, the name could be RTE_FLOW_ITEM_TYPE_MHPSDP_PHY_PORT,
> > "MHPSDP" meaning "Multiple Hardware Ports - Single DPDK Port".
> > We could replace "PHY" with "HW" as well.
>
> Since DPDK only probes/attaches the single port, the first case does not seem to apply here.
> Here, 'affinity' stands for the packet's association with an actual physical port.

I think it is more than affinity, because the packet is effectively received from this port.
And the other concern is that this name does not give any clue
that we are talking about multiple ports merged into a single one.

> > Note that we cannot use the new item RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
> > because we are in a case where multiple hardware ports are merged
> > in a single software represented port.
> >
> > [...]
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this structure may change without prior notice
> > > + *
> > > + * RTE_FLOW_ITEM_TYPE_PORT_AFFINITY
> > > + *
> > > + * For the multiple hardware ports connected to a single DPDK port (mhpsdp),
> > > + * use this item to match the hardware port affinity of the packets.
> > > + */
> > > +struct rte_flow_item_port_affinity {
> > > +	uint8_t affinity; /**< port affinity value. */
> > > +};
> >
> > We need to define how the port numbering is done.
> > Is it driver-dependent?
> > Does it start at 0? etc.
>
> Users can define any value they want; one use case is that a packet could be
> received on and sent to the same port, so they can set the same 'affinity'
> value in the flow and queue configuration.

No, it does not work.
If ports are numbered 1 and 2, and the user thinks they are 0 and 1,
port 2 won't be matched at all.

> The flow behavior is driver-dependent.
>
> Thanks.