From: Thomas Monjalon
To: "Zhang, Qi Z"
Cc: Andrew Rybchenko, Ori Kam, dev@dpdk.org, pbhagavatula@marvell.com,
 "Yigit, Ferruh", jerinj@marvell.com, "Mcnamara, John", "Kovacevic, Marko",
 Adrien Mazarguil, david.marchand@redhat.com, ktraynor@redhat.com,
 Olivier Matz
Date: Fri, 08 Nov 2019 14:06:18 +0100
Message-ID: <2558977.FuTiyjgROS@xps>
In-Reply-To: <039ED4275CED7440929022BC67E7061153DC6603@SHSMSX105.ccr.corp.intel.com>
References: <20191025152142.12887-1-pbhagavatula@marvell.com>
 <1784584.NQqjHnNvIa@xps>
 <039ED4275CED7440929022BC67E7061153DC6603@SHSMSX105.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update as an offload

08/11/2019 12:40, Zhang, Qi Z:
>
> > -----Original Message-----
> > From: dev On Behalf Of Thomas Monjalon
> > Sent: Friday, November 8, 2019 7:04 PM
> > To: Andrew Rybchenko
> > Cc: Ori Kam; dev@dpdk.org; pbhagavatula@marvell.com; Yigit, Ferruh;
> > jerinj@marvell.com; Mcnamara, John; Kovacevic, Marko; Adrien Mazarguil;
> > david.marchand@redhat.com;
> > ktraynor@redhat.com; Olivier Matz
> > Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update
> > as an offload
> >
> > 08/11/2019 11:42, Andrew Rybchenko:
> > > On 11/8/19 1:28 PM, Thomas Monjalon wrote:
> > > > 08/11/2019 09:35, Andrew Rybchenko:
> > > >> The problem:
> > > >> ~~~~~~~~~~~~
> > > >> PMD wants to know before port start if the application wants to use
> > > >> flow MARK/FLAG in the future. It is required because:
> > > >>
> > > >> 1. HW may be configured in a different way to reserve resources
> > > >>    for MARK/FLAG delivery
> > > >>
> > > >> 2. Datapath implementation choice may depend on it (e.g. vPMD
> > > >>    is faster, but does not support MARK)
> > > >
> > > > Thank you for the clear problem statement.
> > > > I agree with it. This is a real design issue.
> > > >
> > > >> Discussed solutions:
> > > >> ~~~~~~~~~~~~~~~~~~~~
> > >
> > > Maybe it is not 100% clear, since below are alternatives.
> > >
> > > >> A. Explicit Rx offload suggested by the patch.
> > > >>
> > > >> B. Implicit, by validation of a flow rule with MARK/FLAG actions used.
> > > >>
> > > >> C. Use a dynamic field/flag (i.e. the application registers a dynamic
> > > >>    field and/or flag and the PMD uses lookup to solve the problem),
> > > >>    plus part of (B) to discover if the feature is supported.
> > > >
> > > > The dynamic field should be registered via a new API function named
> > > > '_init'.
> > > > It means the application must explicitly request the feature.
> > > > I agree this is the way to go.
> > >
> > > If I understand your statement correctly, (C) is not ideal since
> > > it looks global. If a registered dynamic mbuf field is the flag that
> > > the feature should be enabled, it is a flag for all ports/devices.
> > >
> > > >> All solutions require changes in applications which use these
> > > >> features. There is a deprecation notice in place which advertises
> > > >> the DEV_RX_OFFLOAD_FLOW_MARK addition, but maybe it is OK to
> > > >> substitute it with solution (B) or (C). Solution (C) requires
> > > >> changes since it should be combined with (B) in order to understand
> > > >> if the feature is supported.
> > > >
> > > > I don't understand.
> > > > Application request and PMD support are two different things.
> > > > PMD support must be via rte_flow validation on a case-by-case basis
> > > > anyway.
> > >
> > > I mean that the application wants to understand if the feature is
> > > supported. Then, it wants to enable it. In the case of (B), if I
> > > understand the solution correctly, there is no explicit way to enable
> > > it; the PMD just detects it when discovery is done (that is what I mean
> > > by "implicit", and it is a drawback from my point of view, but it could
> > > still be considered). (C) solves this problem of (B).
> > >
> > > >> Advantages and drawbacks of solutions:
> > > >> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >> 1. The main drawback of (A) is "duplication", since we already
> > > >>    have a way to request flow MARK using the rte_flow API.
> > > >>    I don't fully agree that it is a duplication, but I agree
> > > >>    that it sounds like duplication and complicates flow MARK
> > > >>    usage by applications a bit. (B) complicates it as well.
> > > >>
> > > >> 2. One more drawback of solution (A) is the necessity of a
> > > >>    similar solution for META, and it eats one more offload bit.
> > > >>    Yes, that's true, and I think it is not a problem.
> > > >>    It would make it easier for applications to find out if
> > > >>    either MARK or META is supported.
> > > >>
> > > >> 3. The main advantage of solution (A) is simplicity.
> > > >>    It is simple for the application to understand if it is supported.
> > > >>    It is simple in the PMD to understand that it is required.
> > > >>    It is simple to disable it - just reconfigure.
> > > >>    Also it is easier to document - just mention that
> > > >>    the offload should be supported and enabled.
> > > >>
> > > >> 4. The main advantage of solution (B) is no "duplication".
> > > >>    I agree that it is a valid argument. Solving the problem
> > > >>    without extra entities is always nice, but unfortunately
> > > >>    it is too complex in this case.
> > > >>
> > > >> 5. The main drawback of solution (B) is its complexity.
> > > >>    It is necessary to choose a flow rule which should be used
> > > >>    as the criterion, and it could be hardware dependent.
> > > >>    Complex logic is required in the PMD if it wants to address
> > > >>    the problem and control MARK delivery based on validated flow
> > > >>    rules. It adds a dependency between start/stop processing and
> > > >>    flow rule validation code.
> > > >>    It is pretty complicated to document.
> > > >>
> > > >> 6. Useless enabling of the offload in the case of solution (A),
> > > >>    when the flow rules actually used do not support MARK, looks
> > > >>    like a drawback as well, but it is easily mitigated by a
> > > >>    combination with solution (B), and is only required if the
> > > >>    application wants to dive into that level of optimization and
> > > >>    complexity, which makes sense if the application knows the
> > > >>    required flow rules in advance. So, it is not a problem in
> > > >>    this case.
> > > >>
> > > >> 7. Solution (C) has the drawbacks of solution (B) for
> > > >>    applications to understand if these features are supported,
> > > >>    but no drawbacks in the PMD, since an explicit criterion
> > > >>    (dynamic field/flag lookup) is used to enable/disable.
> > > >>
> > > >> 8. Solution (C) is nice since it avoids "duplication".
> > > >>
> > > >> 9. The main drawback of solution (C) is asymmetry.
> > > >>    As it was discussed in the case of RX_TIMESTAMP
> > > >>    (if I remember it correctly):
> > > >>     - PMD advertises the RX_TIMESTAMP offload capability
> > > >>     - application enables the offload
> > > >>     - PMD registers the dynamic field for the timestamp
> > > >>    Solution (C):
> > > >>     - PMD advertises nothing
> > > >>     - application uses solution (B) to understand if
> > > >>       these features are supported
> > > >>     - application registers the dynamic field/flag
> > > >>     - PMD does the lookup and solves the problem
> > > >>    The asymmetry could be partially mitigated if the RX_TIMESTAMP
> > > >>    solution is changed to require the application to register
> > > >>    dynamic fields and the PMD to do a lookup if the offload is
> > > >>    enabled. So, the only difference would be no offload in the
> > > >>    case of flow MARK/FLAG, and usage of complex logic to
> > > >>    understand if it is supported or not.
> > > >>    Maybe it would be really good, since it would allow dynamic
> > > >>    fields to be registered before mempool population.
> > > >>
> > > >> 10. A common drawback of solutions (B) and (C) is no granularity.
> > > >>     Solution (A) may be per queue, while (B) and (C) cannot be
> > > >>     per queue. Moreover, (C) looks global - for all devices.
> > > >>     It could be really painful.
> > > >>
> > > >> (C) is nice, but I still vote for the simplicity and granularity
> > > >> of (A).
> > > >
> > > > I vote for a clear separation of application needs and PMD support,
> > > > by using method C (dynamic fields).
> > > > I agree timestamp must use the same path.
> > > > I agree it's complicated because we don't know in advance whether a
> > > > flow rule will be accepted, but that's the reality; config is complex.
> > >
> > > Do you think that the global nature of (C) is acceptable?
> >
> > That's a good question.
> > Maybe the feature request should be per port.
> > In this case, are we back to solution A with a flag per port?
> >
> > Note that A and C will not guarantee that the offload will be possible.
> > We need B (flow rule validation) anyway.
>
> I may not understand how solution B can work well for all the cases.

I think you didn't read the above carefully. I am not saying B will solve
everything, but it is needed in addition to A and C.

> A rte_flow rule can be issued after dev_start, which means the rx_burst
> function is already selected at that time, so does that mean the driver
> needs to switch from a non-mark-offload-aware path to a mark-offload-aware
> path without stopping the device?

I agree to have the application request the offload before starting (A).

> or does it have to reject the flow?

Yes, if the PMD is not ready (it ignored the app request, or the app did
not request), it must reject the flow rule.

> The question is: if we have two datapaths, one supporting some offload,
> the other not but faster, which one should be selected during dev_start?
> Isn't an offload widely used to solve exactly this problem?
>
> I think option A solves all the problems; option C might also work, but
> A looks much more straightforward to me.

Again, the answer is below:

> > It seems A, B, C are not alternatives but all required as pieces of a
> > puzzle...