Date: Sun, 13 Sep 2015 19:01:56 +0300
From: Avi Kivity
To: Konstantin Ananyev
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598

On Sep 13, 2015 6:54 PM, "Ananyev, Konstantin" wrote:
>
> > -----Original Message-----
> > From: Avi Kivity [mailto:avi@cloudius-systems.com]
> > Sent: Sunday, September 13, 2015 1:33 PM
> > To: Ananyev, Konstantin; Thomas Monjalon; Vladislav Zolotarov; didier.pallard
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598
> >
> > On 09/13/2015 02:47 PM, Ananyev, Konstantin wrote:
> > >
> > >> -----Original Message-----
> > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Avi Kivity
> > >> Sent: Friday, September 11, 2015 6:48 PM
> > >> To: Thomas Monjalon; Vladislav Zolotarov; didier.pallard
> > >> Cc: dev@dpdk.org
> > >> Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598
> > >>
> > >> On 09/11/2015 07:08 PM, Thomas Monjalon wrote:
> > >>> 2015-09-11 18:43, Avi Kivity:
> > >>>> On 09/11/2015 06:12 PM, Vladislav Zolotarov wrote:
> > >>>>> On Sep 11, 2015 5:55 PM, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:
> > >>>>>> 2015-09-11 17:47, Avi Kivity:
> > >>>>>>> On 09/11/2015 05:25 PM, didier.pallard wrote:
> > >>>>>>>> Hi vlad,
> > >>>>>>>>
> > >>>>>>>> Documentation states that a packet (or multiple packets in transmit segmentation) can span any number of buffers (and their descriptors) up to a limit of 40 minus WTHRESH minus 2.
> > >>>>>>>>
> > >>>>>>>> Shouldn't there be a test in the transmit function that properly drops mbufs with too large a number of segments, while incrementing a statistic; otherwise the transmit function may be locked by the faulty packet without notification.
> > >>>>>>>
> > >>>>>>> What we proposed is that the pmd expose to dpdk, and dpdk expose to the application, an mbuf check function. This way applications that can generate complex packets can verify that the device will be able to process them, and applications that only generate simple mbufs can avoid the overhead by not calling the function.
> > >>>>>>
> > >>>>>> More than a check, it should be exposed as a capability of the port. Anyway, if the application sends too many segments, the driver must drop it to avoid a hang, and maintain a dedicated statistic counter to allow easy debugging.
> > >>>>>
> > >>>>> I agree with Thomas - this should not be optional. Malformed packets should be dropped. In the ixgbe case it's a very simple test - it's a single branch per packet, so I doubt that it could impose any measurable performance degradation.
> > >>>>
> > >>>> A drop allows the application no chance to recover. The driver must either provide the ability for the application to know that it cannot accept the packet, or it must fix it up itself.
> > >>>
> > >>> I have the feeling that everybody agrees on the same thing: the application must be able to make a well formed packet by checking the limitations of the port. What about a field rte_eth_dev_info.max_tx_segs?
> > >>
> > >> It is not generic enough. i40e has a limit that it imposes post-TSO.
> > >>
> > >>> In case the application fails in its checks, the driver must drop it and notify the user via a stat counter.
> > >>> The driver can also remove the hardware limitation by gathering the segments, but it may be hard to implement and would be a slow operation.
> > >>
> > >> I think that to satisfy both the 64b full line rate applications and the more complicated full stack applications, this must be made optional. In particular, an application that only forwards packets will never hit a NIC's limits, so it need not take any action. That's why I think a verification function is ideal; a forwarding application can ignore it, and a complex application can call it, and if it fails the packet, it can linearize it itself, removing complexity from dpdk itself.
> > >
> > > I think that's a good approach to that problem. As I remember, we discussed something similar a while ago - a function (tx_prep() or something) that would check nb_segs and probably some other HW-specific restrictions, calculate the pseudo-header checksum, reset the ip header len, etc.
> > >
> > > On the other hand, we can also add two more fields into rte_eth_dev_info:
> > > 1) Max num of segs per TSO packet (tx_max_seg ?).
> > > 2) Max num of segs per single packet/TSO segment (tx_max_mtu_seg ?).
> > > So for ixgbe both will have the value 40 - wthresh, while for i40e 1) would be UINT8_MAX and 2) would be 8.
> > > Then the upper layer can use that information to select an optimal size for its TX buffers.
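As an illustration only, here is one way the upper layer could turn two such per-port values into a TX buffer size. The struct below merely stands in for the proposed rte_eth_dev_info fields, which do not exist in DPDK, and min_tx_dataroom is a hypothetical helper, not an API from this thread:

#include <stdint.h>

/*
 * Hypothetical per-port limits, mirroring the proposal above; in a real
 * implementation these would be two new fields in struct rte_eth_dev_info.
 */
struct tx_seg_limits {
        uint16_t tx_max_seg;      /* max segments per TSO packet */
        uint16_t tx_max_mtu_seg;  /* max segments per packet / TSO segment */
};

/*
 * Pick a per-mbuf data room size such that an MTU-sized frame never needs
 * more chained buffers than the port allows for one packet / TSO segment.
 */
static uint16_t
min_tx_dataroom(uint16_t mtu, const struct tx_seg_limits *lim)
{
        /* round up so tx_max_mtu_seg buffers always cover the MTU */
        return (mtu + lim->tx_max_mtu_seg - 1) / lim->tx_max_mtu_seg;
}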
> >
> > This will break whenever the fevered imagination of hardware designers comes up with a new limit.
> >
> > We can have an internal function that accepts these two parameters, and then the driver-specific function can call this internal function:
> >
> > static bool i40e_validate_packet(struct rte_mbuf *m) {
> >         return rte_generic_validate_packet(m, 0, 8);
> > }
> >
> > static bool ixgbe_validate_packet(struct rte_mbuf *m) {
> >         return rte_generic_validate_packet(m, 40, 2);
> > }
> >
> > This way, the application is isolated from changes in how invalid packets are detected.
> >
> I am not saying we shouldn't have a tx_prep (tx_validate?) function per PMD. As I said before, I like that approach. I think we should have tx_prep (as you suggested) that most people using full-path TX would call, *plus* these extra fields in rte_eth_dev_info, so if someone needs that information - it would be there.

I think this is reasonable. Having those values can allow the application to avoid generating bad packets in the first place.

> Konstantin
>
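For reference, a possible shape for the generic checker that the wrappers above assume. rte_generic_validate_packet is not an existing DPDK function; the parameter meanings chosen here (a segment cap for TSO packets and one for ordinary packets, with 0 meaning "no cap") are only one reading of the two numbers used in the calls above, so treat this as a sketch rather than the API discussed in the thread:

#include <stdbool.h>
#include <stdint.h>
#include <rte_mbuf.h>

/*
 * Sketch: check an mbuf chain against caller-supplied segment limits.
 * max_tso_segs applies when the packet requests TSO, max_pkt_segs otherwise;
 * a limit of 0 is treated as "no limit".  A complete check would also have
 * to bound the descriptors consumed by each TSO output segment, which
 * depends on where the payload happens to split and cannot be derived from
 * nb_segs alone.
 */
static bool
rte_generic_validate_packet(const struct rte_mbuf *m,
                            uint16_t max_tso_segs, uint16_t max_pkt_segs)
{
        uint16_t limit = (m->ol_flags & PKT_TX_TCP_SEG) ?
                        max_tso_segs : max_pkt_segs;

        if (limit == 0)
                return true;            /* no restriction for this case */
        return m->nb_segs <= limit;
}

/* Example wrapper, matching the i40e call in the mail: no whole-chain cap
 * for TSO, at most 8 segments otherwise (the figure quoted earlier in the
 * thread). */
static bool
i40e_validate_packet(const struct rte_mbuf *m)
{
        return rte_generic_validate_packet(m, 0, 8);
}

Whether such limits end up hard-coded in per-driver wrappers like these or advertised through the proposed rte_eth_dev_info fields is exactly the trade-off debated above.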