From: Andrew Rybchenko
To: Ferruh Yigit, Thomas Monjalon, Shahaf Shuler, Wei Dai
CC: dev@dpdk.org
Date: Fri, 11 May 2018 19:08:37 +0300
Subject: Re: [dpdk-dev] Rx/Tx offloads checks behaviour in 18.05

On 05/11/2018 05:22 PM, Ferruh Yigit wrote:
> On 5/11/2018 8:07 AM, Andrew Rybchenko wrote:
>> Hi all,
>>
>> I think the Rx/Tx offloads checks behaviour is inconsistent in
>> next-net as of today.
>>
>> Consistency checks have been removed from PMDs and substituted with
>> error logs in ethdev.
> Yes.
>
>> Basically, an application which has not switched to the new offload
>> API has no way to find out whether, for example, Rx scatter is
>> supported. The Rx scatter offload was introduced in 17.11 to
>> substitute the corresponding flag in the device Rx mode.
>>
>> A not-yet-updated application could try to enable Rx scatter on
>> device configure and get a failure if it is not supported. Yes, it
>> is not fine-grained and there could be numerous reasons behind the
>> configure failure. With 18.05 the configure will pass, and moreover
>> the hardware may be configured to do Rx scatter despite there being
>> no real support in the PMD. The consequences could range from simply
>> dropping scattered packets or delivering truncated packets to memory
>> corruption, etc.
>>
>> Something similar could happen with multi-segment packets on Tx. The
>> application configures a Tx queue without the NOMULTISEG flag, the
>> TxQ setup passes (with an error log that multi-segment is not
>> supported, but it is just an error log), and the application
>> generates multi-segment packets which are simply truncated (if the
>> first segment length is used as the packet length on transmit) or
>> garbage is sent (if the total packet length is used, i.e. a possible
>> disclosure of security-sensitive information, since it could be data
>> from a neighbour packet).
> How common do you think these error cases are?

I don't know. My fear is that the consequences are really bad, and it
is a regression since the checks have been removed from the PMDs.
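To make the Tx hazard concrete, here is a rough sketch of what a
not-yet-converted application does today (hypothetical code, only to
illustrate the flow; error handling omitted):

    #include <rte_ethdev.h>

    /* Hypothetical sketch of an application still on the old Tx API;
     * only to illustrate the hazard, error handling omitted. */
    static int
    setup_txq_old_api(uint16_t port_id, uint16_t nb_txd,
                      unsigned int socket_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txq_conf;

        rte_eth_dev_info_get(port_id, &dev_info);
        txq_conf = dev_info.default_txconf;

        /* Old API: clearing ETH_TXQ_FLAGS_NOMULTSEGS declares that the
         * application will pass multi-segment mbufs to tx_burst. */
        txq_conf.txq_flags &= ~ETH_TXQ_FLAGS_NOMULTSEGS;

        /* A PMD without multi-segment support used to reject this
         * setup; in next-net it only logs an error and succeeds, so
         * the application never learns the offload is missing. */
        return rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id,
                                      &txq_conf);
    }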
>> I think we have the following options:
>>
>> A. Roll back the corresponding changes which removed the checks from
>>    the PMDs (at least some PMDs will not be affected).
>>
>> B. Fail configure if an unsupported offload is requested (all PMDs
>>    must be converted in the release, so the reporting of supported
>>    offloads must be correct) AND add a check that the offloads
>>    requested at Tx queue level (derived from txq_flags) are supported
>>    at least somewhere (i.e. in tx_offload_capa).
> The issue is not the PMDs, they should support the new offload API.
> The concern is breaking applications, which are out of our control.
>
> With the current approach some old application may request an invalid
> offload and the PMD won't return an error to the app; agreed, this is
> a concern.
> But adding error returns will break the same applications, in a
> better, more obvious way, and has the possibility to break more
> applications: ones really not concerned about offloads may be hit as
> well.

It depends on which PMD is used. Yes, there were no checks in ethdev
before. If a PMD does not support multi-segment Tx, some checksum or
VLAN insertion offload, but the application requests it and relies on
it, it will result in invalid packets being sent to the network.

I realize that some applications may simply use empty txq_flags but do
not use any offloads in fact. If so, TxQ setup will fail for such
applications if the checks are made fatal and the underlying PMD does
not support these offloads. At least it is safer behaviour than
transmitting garbage.

Yes, it is not an easy decision. I will publish my patches which
passed our tests.

>> C. Describe the behaviour changes in the release notes to try to
>>    make it at least clear for DPDK users.

I don't like this option at all.

>> Any other ideas?
>>
>> I would vote for B since it is a step forward, and even if it makes
>> some apps fail, I think it is better than the consequences of
>> missing checks.
>> I'll make a patch for option B and test it meanwhile.
>>
>> Andrew
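For clarity, the option B check I have in mind is roughly the
following (a sketch only, assuming the 18.05 rte_eth_dev_info and
rte_eth_conf fields; the actual patch may differ):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper showing the option B semantics: fail
     * instead of merely logging when a requested offload is not
     * advertised by the device. */
    static int
    validate_requested_offloads(uint16_t port_id,
                                const struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);

        /* Every bit requested in rxmode.offloads must be present in
         * the Rx capability mask reported by the PMD. */
        if ((conf->rxmode.offloads & dev_info.rx_offload_capa) !=
            conf->rxmode.offloads)
            return -EINVAL;

        /* Likewise, Tx offloads (including those derived from the old
         * txq_flags) must appear somewhere in tx_offload_capa. */
        if ((conf->txmode.offloads & dev_info.tx_offload_capa) !=
            conf->txmode.offloads)
            return -EINVAL;

        return 0;
    }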