From: "Zhang, Qi Z"
To: Thomas Monjalon, Andrew Rybchenko
CC: dev@dpdk.org, Ori Kam, pbhagavatula@marvell.com, "Yigit, Ferruh", jerinj@marvell.com, "Mcnamara, John", "Kovacevic, Marko", Adrien Mazarguil, david.marchand@redhat.com, ktraynor@redhat.com
Date: Thu, 31 Oct 2019 23:59:53 +0000
Message-ID: <039ED4275CED7440929022BC67E7061153DC1341@SHSMSX105.ccr.corp.intel.com>
In-Reply-To: <3078181.9TjvbByyqQ@xps>
References: <20191025152142.12887-1-pbhagavatula@marvell.com> <3078181.9TjvbByyqQ@xps>
Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update as an offload

> -----Original Message-----
> From: dev On Behalf Of Thomas Monjalon
> Sent: Thursday, October 31, 2019 10:50 PM
> To: Andrew Rybchenko
> Cc: dev@dpdk.org; Ori Kam; pbhagavatula@marvell.com; Yigit, Ferruh;
> jerinj@marvell.com; Mcnamara, John; Kovacevic, Marko; Adrien Mazarguil;
> david.marchand@redhat.com; ktraynor@redhat.com
> Subject: Re: [dpdk-dev] [PATCH 1/2] ethdev: add flow action type update as an
> offload
>
> 31/10/2019 10:49, Andrew Rybchenko:
> > On 10/28/19 5:00 PM, Ori Kam wrote:
> > >> -----Original Message-----
> > > From: Andrew Rybchenko
> > >> On 10/28/19 1:50 PM, Ori Kam wrote:
> > >>> Hi Pavan,
> > >>>
> > >>> Sorry for jumping in late.
> > >>>
> > >>> I don't understand why we need this feature. If the user didn't
> > >>> set any flow with MARK, then the user doesn't need to check it.
> > >> There is a pretty long discussion on the topic already; please read [1].
> > >>
> > >> [1] http://inbox.dpdk.org/dev/3251fc00-7598-1c4f-fc2a-380065f0a435@solarflare.com/
> > >>
> > > Thanks for the link, it was an interesting read.
> > >
> > >>> Also it breaks compatibility.
> > >> Yes, there is a deprecation notice for it.
> > >>
> > >>> If my understanding is correct, the MARK field is going to be moved to
> > >>> a dynamic field, and this will be the way to control the use of MARK.
> > >> Yes, and I think the offload should be used to request dynamic field
> > >> registration. Similar to the timestamp in the dynamic mbuf examples:
> > >> the application requests the Rx timestamp offload, and the PMD registers
> > >> the dynamic field.
> > >>
> > > In general it was decided that there will be no capability reporting in the
> > > rte_flow API, due to the fact that it is impossible to support all
> > > possible combinations. For example, a PMD can allow mark on Rx while not
> > > supporting it on e-switch (transfer) or on Tx.
> > > The only way to check it is validating a flow. If the flow is validated, then
> > > the action is supported.
> > > This is the exact approach we are implementing with the Meta feature.
> > > So as I see it, the logic should be something like this:
> > > 1. run dev_configure.
> > > 2. allocate the mempool.
> > > 3. set up the queues.
> > > 4. run rte_flow_validate with the mark action.
> > > If the flow is validated, register mark in the mbuf; else don't register it.
> > > If the PMD needs some special setting for mark, it can update the queue
> > > when it gets the flow to validate.
> > > At this stage the device is not started, so any change is allowed.
> >
> > I understand why there is no capability reporting in the rte_flow API when it
> > is about the rte_flow API itself. The problem appears when the rte_flow API
> > starts to interact with other functionality.
> > Which pattern/actions should the application try in order to decide if
> > MARK is supported or not?
>
> Why should the application decide whether MARK is supported or not?
> In my understanding it can be enabled dynamically per flow.

Sorry to break into the discussion. I think the mark offload will give the benefits below, based on some real cases.

1. For a PMD which does not enable mark offload on all data paths (for example, the vector PMD does not support mark, but the non-vector PMD does), the offload can give the driver a hint to choose the correct data path. Otherwise, when the vector PMD is selected at dev_start, a flow with a mark action has to be rejected.

2. Extracting the 32-bit mark from the Rx descriptor has a considerable performance cost, especially on the vector PMD. So it would be nice if the driver knows that mark offload is not necessary for the application; then it can always select a faster path.

True, the driver can track when the first flow with mark is issued and when the last flow with mark is deleted, and then branch the mark-extraction code accordingly, but the offload just gives the driver another option to simplify this.

Regards
Qi

>
> > The right answer is a pattern/action
> > which will really be used, but what to do if there are many
> > combinations, or if these combinations are not known in advance?
> > The minimal one? But I can easily imagine cases where the minimal one is not
> > supported, but more complex real-life patterns are supported.
> >
> > The main idea behind the offload is: the more you know in advance, the
> > more you can optimize without overcomplicating drivers and HW.
> >
> > In the case of OVS, the absence of the MARK offload would mean that OVS should
> > not even try to use partial offload, even if it is enabled.
> > So no effort is required to try to convert the flow into a pattern and
> > validate the flow rule.
>
> That's interesting feedback.
> I would like to understand why OVS cannot adapt its datapath on demand per
> port, per queue and per flow?