From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Gregory Etelson, Ajit Khaparde
CC: dpdk-dev, Matan Azrad, Raslan Darawsheh, Ori Kam, NBU-Contact-Thomas Monjalon, Ferruh Yigit
Date: Thu, 17 Sep 2020 18:18:01 +0300
Message-ID: <0e19026c-db3d-9896-7b73-3efb39d1107b@solarflare.com>
References: <20200625160348.26220-1-getelson@mellanox.com> <20200908201552.14423-1-getelson@nvidia.com> <20200908201552.14423-2-getelson@nvidia.com> <599184f9-72ce-d0f1-a586-d0182888497e@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH v2 1/4] ethdev: allow negative values in flow rule types
List-Id: DPDK patches and discussions
On 9/17/20 10:56 AM, Gregory Etelson wrote:
>> On 9/16/20 8:21 PM, Gregory Etelson wrote:
>>> On 9/15/20 7:36 AM, Ajit Khaparde wrote:
>>> On Tue, Sep 8, 2020 at 1:16 PM Gregory Etelson wrote:
>>>
>>> RTE flow items & actions use positive values in item & action type.
>>> Negative values are reserved for PMD private types. PMD items &
>>> actions are usually not exposed to the application and are not used
>>> to create RTE flows.
>>>
>>> The patch gives applications with access to PMD flow items & actions
>>> the ability to integrate RTE and PMD items & actions and use them to
>>> create flow rules.
>>>
>>> While we are reviewing this, some quick comments/questions..
>>>
>>> Doesn't this go against the above "PMD items & actions usually are
>>> not exposed to application and are not used to create RTE flows."?
>>> Why would an application try to use PMD-specific private types?
>>> Isn't this contrary to having a standard API?
>>>
>>> +1
>>>
>>> I would like to clarify the purpose and use of the private elements
>>> patch. That patch is a prerequisite for the [PATCH v2 2/4] "ethdev:
>>> tunnel offload model" patch.
>>> The tunnel offload API provides a unified, hardware-independent
>>> model to offload tunneled packets, match on packet headers in
>>> hardware, and restore the outer headers of partially offloaded
>>> packets.
>>> The model implementation depends on hardware capabilities.
>>> For example, if hardware supports inner NAT, it can do NAT first and
>>> postpone decap to the end, while other hardware that cannot do inner
>>> NAT must decap first and run the NAT actions afterwards. Such
>>> hardware has to save the outer header in some hardware context,
>>> register, or memory, for the application to restore the packet
>>> later, if needed. Also, in this case the exact solution depends on
>>> the PMD because of the limited number of hardware contexts.
>>> Although an application working with DPDK can implement all these
>>> requirements with the existing flow rules API, it would have to
>>> address each hardware's specifics separately.
>>> To overcome this limitation, we selected a design where the
>>> application queries the PMD for actions, or items, that are optimal
>>> for the hardware that the PMD represents. The result can be a
>>> mixture of RTE and PMD private elements - it's up to the PMD
>>> implementation. The application passes these elements back to the
>>> PMD as a flow rule recipe that is already optimal for the underlying
>>> hardware.
>>> If the PMD has private elements in such rule items or actions, these
>>> private elements must not be rejected by the RTE layer.
>>>
>>> I hope this helps to understand what the model is trying to achieve.
>>> Did that clarify your concerns?
>>
>> There is a very simple question which I can't answer after reading
>> it. Why do these PMD-specific actions and items not bind the
>> application to a specific vendor? If they do bind, it should be
>> clearly stated in the description. If not, I'd like to understand
>> why, since opaque actions/items are not really well defined and
>> hardly portable across vendors.
>
> The tunnel offload API does not bind the application to a vendor.
> One of the main goals of this model is to provide the application
> with a vendor/hardware independent solution.
> The PMD transfers an array of items to the application. The
> application passes that array back to the PMD as opaque data, in
> rte_flow_create(), without reviewing the array content.
> Therefore, if there are internal PMD actions in the array, they have
> no effect on the application.
> Consider the following application code example:
>
> /* get PMD actions that implement tunnel offload */
> rte_tunnel_decap_set(&tunnel, &pmd_actions, pmd_actions_num, error);
>
> /* compile an array of actions to create a flow rule */
> memcpy(actions, pmd_actions, pmd_actions_num * sizeof(actions[0]));
> memcpy(actions + pmd_actions_num, app_actions,
>        app_actions_num * sizeof(actions[0]));
>
> /* create the flow rule */
> rte_flow_create(port_id, attr, pattern, actions, error);
>
> Vendor A provides pmd_actions_A = {va1, va2 … vaN}
> Vendor B provides pmd_actions_B = {vb1}
> Regardless of the pmd_actions content, the application code will not
> change. However, each PMD will receive the exact, hardware-related
> actions for tunnel offload.

Many thanks for the explanations. I got it. I'll wait for the next
version to take a look at the code once again.