From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vladislav Zolotarov
To: Bruce Richardson
Cc: dev@dpdk.org
Date: Fri, 11 Sep 2015 19:13:04 +0300
Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598
In-Reply-To: <59AF69C657FD0841A61C55336867B5B0359263A3@IRSMSX103.ger.corp.intel.com>
References: <1439489195-31553-1-git-send-email-vladz@cloudius-systems.com> <55F2E448.1070602@6wind.com> <55F2E997.5050009@cloudius-systems.com> <1762144.1LKiyImgC1@xps13> <59AF69C657FD0841A61C55336867B5B0359263A3@IRSMSX103.ger.corp.intel.com>

On Sep 11, 2015 7:00 PM, "Richardson, Bruce" wrote:
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vladislav Zolotarov
> > Sent: Friday, September 11, 2015 4:13 PM
> > To: Thomas Monjalon
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1
> > for all NICs but 82598
> >
> > On Sep 11, 2015 5:55 PM, "Thomas Monjalon" wrote:
> > >
> > > 2015-09-11 17:47, Avi Kivity:
> > > > On 09/11/2015 05:25 PM, didier.pallard wrote:
> > > > > On 08/25/2015 08:52 PM, Vlad Zolotarov wrote:
> > > > >>
> > > > >> Helin, the issue has been seen on x540 devices. Please see chapter
> > > > >> 7.2.1.1 of the x540 spec:
> > > > >>
> > > > >> "A packet (or multiple packets in transmit segmentation) can span
> > > > >> any number of buffers (and their descriptors) up to a limit of 40
> > > > >> minus WTHRESH minus 2 (see Section 7.2.3.3 for Tx Ring details
> > > > >> and Section 7.2.3.5.1 for WTHRESH details). For best performance
> > > > >> it is recommended to minimize the number of buffers as much as
> > > > >> possible."
> > > > >>
> > > > >> Could you please clarify why you think that the maximum number of
> > > > >> data buffers is limited to 8?
> > > > >>
> > > > >> thanks,
> > > > >> vlad
> > > > >
> > > > > Hi Vlad,
> > > > >
> > > > > The documentation states that a packet (or multiple packets in
> > > > > transmit segmentation) can span any number of buffers (and their
> > > > > descriptors) up to a limit of 40 minus WTHRESH minus 2.
> > > > >
> > > > > Shouldn't there be a test in the transmit function that properly
> > > > > drops mbufs with too many segments, while incrementing a
> > > > > statistic? Otherwise the transmit function may be locked up by a
> > > > > faulty packet without any notification.
> > > >
> > > > What we proposed is that the PMD expose to DPDK, and DPDK expose to
> > > > the application, an mbuf check function. This way applications that
> > > > generate complex packets can verify that the device will be able
> > > > to process them, and applications that only generate simple mbufs
> > > > can avoid the overhead by not calling the function.
> > >
> > > More than a check, it should be exposed as a capability of the port.
> > > Anyway, if the application sends too many segments, the driver must
> > > drop the packet to avoid a hang, and maintain a dedicated statistic
> > > counter to allow easy debugging.
> >
> > I agree with Thomas - this should not be optional. Malformed packets
> > should be dropped. In the ixgbe case it's a very simple test - a
> > single branch per packet - so I doubt it could impose any measurable
> > performance degradation.
>
> Actually, it could very well do - we'd have to test it. For the vector IO
> paths, every additional cycle in the RX or TX paths causes a noticeable
> perf drop.

Well, if your application is willing to know all the different HW limitations then you may not need it. However, an application usually doesn't want to know the HW technical details, and in this case ignoring them may cause the HW to hang. Of course, if your app always sends single-fragment packets of less than 1500 bytes, then you are right and you will most likely not hit any HW limitation. However, what I have in mind is a fully featured case where packets are a bit bigger and more complicated, and where a single branch per packet will change nothing. This is regarding the 40-segments case.
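The drop-and-count behaviour discussed above could be sketched roughly as below. This is illustrative only, not the actual ixgbe code: the names (`tx_check_seg_limit`, `oversized_pkts`, the simplified `mbuf`) are hypothetical, and the limit is just the x540 datasheet's "40 minus WTHRESH minus 2" bound. It is also the "single branch per packet" referred to in the thread:

```c
#include <stdint.h>

/* x540 datasheet bound: a packet may span at most 40 - WTHRESH - 2
 * descriptors. WTHRESH value here is an assumption for illustration. */
#define NIC_WTHRESH  0
#define TX_MAX_SEG   (40 - NIC_WTHRESH - 2)

/* Hypothetical per-queue stats with the dedicated counter Thomas asked for. */
struct tx_queue_stats {
    uint64_t oversized_pkts;   /* packets dropped for too many segments */
};

/* Simplified stand-in for the real rte_mbuf. */
struct mbuf {
    uint16_t nb_segs;          /* number of chained data buffers */
};

/* Returns 1 if the packet fits the device limit, 0 if it must be dropped.
 * One branch per packet; the counter makes the drop visible for debugging
 * instead of silently hanging the Tx ring. */
static inline int
tx_check_seg_limit(struct tx_queue_stats *stats, const struct mbuf *m)
{
    if (m->nb_segs > TX_MAX_SEG) {
        stats->oversized_pkts++;
        return 0;
    }
    return 1;
}
```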
In regard to the RS bit - this is relevant for any packet, and according to the spec it should be set in every packet.

> /Bruce
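As a footnote, the check-function idea Avi raised could look like an optional per-port hook along the following lines. This is purely a sketch of the proposal, not an existing DPDK API at the time of this thread; `tx_pkt_check`, `eth_port`, and `app_can_transmit` are all hypothetical names:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: a callback the PMD could expose so applications that
 * build complex mbuf chains can validate them, while applications sending
 * only simple mbufs skip the cost by never installing/calling it. */
typedef int (*tx_pkt_check_t)(const void *mbuf);

struct eth_port {
    tx_pkt_check_t tx_pkt_check;   /* NULL when the device has no limits */
};

/* Example check a PMD might install: reject chains longer than a device
 * limit (here the mbuf is modeled as just its segment count). */
static int
check_max_two_segs(const void *mbuf)
{
    const uint16_t *nb_segs = mbuf;
    return *nb_segs <= 2;
}

/* Application side: the branch is only paid when a check is installed. */
static int
app_can_transmit(const struct eth_port *port, const void *mbuf)
{
    if (port->tx_pkt_check == NULL)
        return 1;                  /* no device-specific constraints */
    return port->tx_pkt_check(mbuf);
}
```

Making the hook optional is what lets simple senders "avoid the overhead by not calling the function", while Thomas's point stands that the driver must still drop oversized packets itself.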