From: "Ananyev, Konstantin"
To: Avi Kivity, Thomas Monjalon, Vladislav Zolotarov, didier.pallard
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598
Date: Sun, 13 Sep 2015 11:47:20 +0000

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Avi Kivity
> Sent: Friday, September 11, 2015 6:48 PM
> To: Thomas Monjalon; Vladislav Zolotarov; didier.pallard
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598
>
> On 09/11/2015 07:08 PM, Thomas Monjalon wrote:
> > 2015-09-11 18:43, Avi Kivity:
> >> On 09/11/2015 06:12 PM, Vladislav Zolotarov wrote:
> >>> On Sep 11, 2015 5:55 PM, "Thomas Monjalon" wrote:
> >>>> 2015-09-11 17:47, Avi Kivity:
> >>>>> On 09/11/2015 05:25 PM, didier.pallard wrote:
> >>>>>> Hi Vlad,
> >>>>>>
> >>>>>> The documentation states that a packet (or multiple packets in
> >>>>>> transmit segmentation) can span any number of buffers (and their
> >>>>>> descriptors) up to a limit of 40 minus WTHRESH minus 2.
> >>>>>>
> >>>>>> Shouldn't there be a test in the transmit function that properly
> >>>>>> drops mbufs with too large a number of segments, while
> >>>>>> incrementing a statistic? Otherwise the transmit function may be
> >>>>>> locked by the faulty packet without notification.
> >>>>>>
> >>>>> What we proposed is that the PMD expose to DPDK, and DPDK expose
> >>>>> to the application, an mbuf check function. This way applications
> >>>>> that can generate complex packets can verify that the device will
> >>>>> be able to process them, and applications that only generate
> >>>>> simple mbufs can avoid the overhead by not calling the function.
> >>>> More than a check, it should be exposed as a capability of the port.
> >>>> Anyway, if the application sends too many segments, the driver must
> >>>> drop the packet to avoid a hang, and maintain a dedicated statistics
> >>>> counter to allow easy debugging.
> >>> I agree with Thomas - this should not be optional. Malformed packets
> >>> should be dropped. In the ixgbe case it's a very simple test - a
> >>> single branch per packet - so I doubt it could impose any measurable
> >>> performance degradation.
> >> A drop allows the application no chance to recover. The driver must
> >> either provide the ability for the application to know that it cannot
> >> accept the packet, or it must fix it up itself.
> > I have the feeling that everybody agrees on the same thing:
> > the application must be able to make a well-formed packet by checking
> > the limitations of the port. What about a field
> > rte_eth_dev_info.max_tx_segs?
>
> It is not generic enough. i40e has a limit that it imposes post-TSO.
>
> > In case the application fails in its checks, the driver must drop the
> > packet and notify the user via a stats counter. The driver can also
> > remove the hardware limitation by gathering the segments, but that may
> > be hard to implement and would be a slow operation.
>
> I think that to satisfy both the 64-byte full-line-rate applications and
> the more complicated full-stack applications, this must be made
> optional. In particular, an application that only forwards packets will
> never hit a NIC's limits, so it need not take any action. That's why I
> think a verification function is ideal: a forwarding application can
> ignore it, and a complex application can call it, and if it fails the
> packet, it can linearize the packet itself, removing complexity from
> DPDK itself.

I think that's a good approach to that problem. As I remember, we
discussed something similar a while ago - a function (tx_prep() or
something like it) that would check nb_segs and probably some other
HW-specific restrictions, calculate the pseudo-header checksum, reset the
IP header length, etc.

On the other hand, we could also add two more fields to rte_eth_dev_info:
1) Max number of segments per TSO packet (tx_max_seg?).
2) Max number of segments per single packet/TSO segment (tx_max_mtu_seg?).

For ixgbe both would have the value 40 - WTHRESH, while for i40e 1) would
be UINT8_MAX and 2) would be 8. The upper layer can then use that
information to select an optimal size for its TX buffers.

Konstantin
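
A minimal sketch of the tx_prep()-style check discussed above, written in
C against the rte_mbuf of that era. The function name and the TX_MAX_SEG
constant are assumptions for illustration - no such helper existed in
DPDK at the time; in the proposal the limit would be reported by the
driver (e.g. 40 - WTHRESH for ixgbe) rather than hard-coded.

    #include <errno.h>
    #include <rte_mbuf.h>

    /* Assumed per-port limit; in the proposal this would come from the
     * driver (e.g. 40 - WTHRESH for ixgbe), not a compile-time constant. */
    #define TX_MAX_SEG 38

    /* Hypothetical pre-transmit check: the single branch per packet that
     * the thread argues is cheap enough to run on the fast path. */
    static inline int
    tx_prep_check(const struct rte_mbuf *m)
    {
            if (m->nb_segs > TX_MAX_SEG)
                    return -EINVAL; /* caller may drop or linearize */
            return 0;
    }

A forwarding application would simply never call this; a stack that
builds long chains would call it once per packet before rte_eth_tx_burst().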
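
To make the two-field idea concrete, here is a sketch of the proposed
limits and the check an application could build on them. The names
tx_max_seg and tx_max_mtu_seg follow the suggestion above but are
assumptions - they were not fields of rte_eth_dev_info at the time of
this thread.

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* The two limits as they might sit in rte_eth_dev_info (assumed). */
    struct tx_seg_limits {
            uint8_t tx_max_seg;     /* max segs per TSO packet; i40e: UINT8_MAX */
            uint8_t tx_max_mtu_seg; /* max segs per packet/TSO segment; i40e: 8 */
    };

    /* For ixgbe both limits would be 40 - WTHRESH. */
    static inline int
    pkt_fits_limits(const struct rte_mbuf *m,
                    const struct tx_seg_limits *lim)
    {
            /* A TSO packet is bounded by the whole-chain limit; a non-TSO
             * packet is a single segment on the wire, so the per-segment
             * limit applies to its full chain. */
            uint8_t limit = (m->ol_flags & PKT_TX_TCP_SEG) ?
                    lim->tx_max_seg : lim->tx_max_mtu_seg;
            return m->nb_segs <= limit;
    }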
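
And a sketch of the application-side fallback Avi describes - copying an
over-long chain into one contiguous mbuf. DPDK had no
rte_pktmbuf_linearize() helper at the time, so this is hand-rolled; it
assumes the pool's data room is large enough for the whole packet and
ignores offload metadata for brevity.

    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    static struct rte_mbuf *
    linearize_pkt(struct rte_mbuf *m, struct rte_mempool *mp)
    {
            struct rte_mbuf *n = rte_pktmbuf_alloc(mp);
            const struct rte_mbuf *seg;

            if (n == NULL || rte_pktmbuf_tailroom(n) < m->pkt_len) {
                    rte_pktmbuf_free(n);
                    return NULL;
            }
            /* Copy every segment's payload into the single new buffer;
             * the tailroom check above guarantees each append succeeds. */
            for (seg = m; seg != NULL; seg = seg->next)
                    rte_memcpy(rte_pktmbuf_append(n, seg->data_len),
                               rte_pktmbuf_mtod(seg, const void *),
                               seg->data_len);
            rte_pktmbuf_free(m);
            return n;
    }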