Date: Fri, 11 Sep 2015 19:14:00 +0300
From: Vladislav Zolotarov
To: Bruce Richardson
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598
In-Reply-To: <59AF69C657FD0841A61C55336867B5B0359263BC@IRSMSX103.ger.corp.intel.com>
References: <1439489195-31553-1-git-send-email-vladz@cloudius-systems.com> <55F2E448.1070602@6wind.com> <55F2E997.5050009@cloudius-systems.com> <1762144.1LKiyImgC1@xps13> <55F2F6A9.6080405@cloudius-systems.com> <59AF69C657FD0841A61C55336867B5B0359263BC@IRSMSX103.ger.corp.intel.com>
On Sep 11, 2015 7:07 PM, "Richardson, Bruce" wrote:
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vladislav Zolotarov
> > Sent: Friday, September 11, 2015 5:04 PM
> > To: Avi Kivity
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1
> > for all NICs but 82598
> >
> > On Sep 11, 2015 6:43 PM, "Avi Kivity" wrote:
> > >
> > > On 09/11/2015 06:12 PM, Vladislav Zolotarov wrote:
> > >>
> > >> On Sep 11, 2015 5:55 PM, "Thomas Monjalon" wrote:
> > >>
> > >> > 2015-09-11 17:47, Avi Kivity:
> > >> > > On 09/11/2015 05:25 PM, didier.pallard wrote:
> > >> > > > On 08/25/2015 08:52 PM, Vlad Zolotarov wrote:
> > >> > > >>
> > >> > > >> Helin, the issue has been seen on x540 devices. Pls., see chapter
> > >> > > >> 7.2.1.1 of the x540 devices spec:
> > >> > > >>
> > >> > > >> A packet (or multiple packets in transmit segmentation) can span
> > >> > > >> any number of buffers (and their descriptors) up to a limit of 40
> > >> > > >> minus WTHRESH minus 2 (see Section 7.2.3.3 for Tx Ring details
> > >> > > >> and Section 7.2.3.5.1 for WTHRESH details). For best performance
> > >> > > >> it is recommended to minimize the number of buffers as possible.
> > >> > > >>
> > >> > > >> Could u, pls., clarify why do u think that the maximum number of
> > >> > > >> data buffers is limited by 8?
> > >> > > >>
> > >> > > >> thanks,
> > >> > > >> vlad
> > >> > > >
> > >> > > > Hi vlad,
> > >> > > >
> > >> > > > Documentation states that a packet (or multiple packets in
> > >> > > > transmit segmentation) can span any number of buffers (and their
> > >> > > > descriptors) up to a limit of 40 minus WTHRESH minus 2.
> > >> > > >
> > >> > > > Shouldn't there be a test in the transmit function that properly
> > >> > > > drops mbufs with too large a number of segments, while
> > >> > > > incrementing a statistic; otherwise the transmit function may be
> > >> > > > locked by the faulty packet without notification.
> > >> > >
> > >> > > What we proposed is that the PMD expose to DPDK, and DPDK expose to
> > >> > > the application, an mbuf check function. This way applications that
> > >> > > can generate complex packets can verify that the device will be
> > >> > > able to process them, and applications that only generate simple
> > >> > > mbufs can avoid the overhead by not calling the function.
> > >> >
> > >> > More than a check, it should be exposed as a capability of the port.
> > >> > Anyway, if the application sends too many segments, the driver must
> > >> > drop it to avoid a hang, and maintain a dedicated statistic counter
> > >> > to allow easy debugging.
> > >>
> > >> I agree with Thomas - this should not be optional. Malformed packets
> > >> should be dropped. In the ixgbe case it's a very simple test - it's a
> > >> single branch per packet, so I doubt that it could impose any
> > >> measurable performance degradation.
> > >
> > > A drop allows the application no chance to recover. The driver must
> > > either provide the ability for the application to know that it cannot
> > > accept the packet, or it must fix it up itself.
> >
> > An appropriate statistics counter would be a perfect tool to detect such
> > issues.
> > Knowingly sending a packet that will cause the HW to hang is not
> > acceptable.
>
> I would agree. Drivers should provide a function to query the max number
> of segments they can accept, and the driver should be able to discard any
> packets exceeding that number, and just track it via a stat.

+1

> /Bruce