From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ferruh Yigit
To: Alejandro Lucero
Cc: dev, stable@dpdk.org
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] nfp: handle packets with length 0 as usual ones
Date: Mon, 21 Aug 2017 14:25:38 +0100
Message-ID: <99cbc1dc-7ae1-0e7b-4577-cbac67e6b763@intel.com>
References: <1502445950-44582-1-git-send-email-alejandro.lucero@netronome.com>
 <51e4be70-4fa4-fbab-6e7a-5f8e9c94ee3c@intel.com>
 <2c2eeb2b-8c19-9821-dc08-6bd2c9e6f2b1@intel.com>

On 8/21/2017 2:08 PM, Alejandro Lucero wrote:
> On Mon, Aug 21, 2017 at 11:34 AM, Ferruh Yigit wrote:
> 
>     On 8/18/2017 5:23 PM, Alejandro Lucero wrote:
>     > On Fri, Aug 18, 2017 at 4:10 PM, Ferruh Yigit wrote:
>     > 
>     >     On 8/11/2017 11:05 AM, Alejandro Lucero wrote:
>     >     > A DPDK app could, for whatever reason, send packets with size 0.
>     >     > The PMD is not sending those packets, which does make sense,
>     >     > but the problem is the mbuf is not released either. That leads
>     >     > to mbufs not being available, because the app trusts that the
>     >     > PMD will do it.
>     >     >
>     >     > Although this is a problem caused by wrong app behaviour, we
>     >     > should harden the PMD in this regard. Not sending a packet with
>     >     > size 0 could be problematic, needing special handling inside the
>     >     > PMD xmit function. It could be a burst of those packets, which can
>     >     > be easily handled, but it could also be a single packet in a burst,
>     >     > which is harder to handle.
>     >     >
>     >     > It would be simpler to just send that kind of packet, which will
>     >     > likely be dropped by the hw at some point. The main problem is how
>     >     > the fw/hw handles the DMA, because a DMA read from a hypothetical 0x0
>     >     > address could trigger an IOMMU error. It turns out, it is safe to
>     >     > send a descriptor with packet size 0 to the hardware: the DMA never
>     >     > happens, from the PCIe point of view.
>     >     >
>     >     > Signed-off-by: Alejandro Lucero
>     >     > ---
>     >     >  drivers/net/nfp/nfp_net.c | 17 ++++++++++++-----
>     >     >  1 file changed, 12 insertions(+), 5 deletions(-)
>     >     >
>     >     > diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
>     >     > index 92b03c4..679a91b 100644
>     >     > --- a/drivers/net/nfp/nfp_net.c
>     >     > +++ b/drivers/net/nfp/nfp_net.c
>     >     > @@ -2094,7 +2094,7 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
>     >     >                */
>     >     >               pkt_size = pkt->pkt_len;
>     >     >
>     >     > -             while (pkt_size) {
>     >     > +             while (pkt) {
>     >     >                       /* Copying TSO, VLAN and cksum info */
>     >     >                       *txds = txd;
>     >     >
>     >     > @@ -2126,17 +2126,24 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
>     >     >                               txq->wr_p = 0;
>     >     >
>     >     >                       pkt_size -= dma_size;
>     >     > -                     if (!pkt_size) {
>     >     > +                     if (!pkt_size)
>     >     >                               /* End of packet */
>     >     >                               txds->offset_eop |= PCIE_DESC_TX_EOP;
>     >     > -                     } else {
>     >     > +                     else
>     >     >                               txds->offset_eop &= PCIE_DESC_TX_OFFSET_MASK;
>     >     > -                             pkt = pkt->next;
>     >     > -                     }
>     >     > +
>     >     > +                     pkt = pkt->next;
>     >     >                       /* Referencing next free TX descriptor */
>     >     >                       txds = &txq->txds[txq->wr_p];
>     >     >                       lmbuf = &txq->txbufs[txq->wr_p].mbuf;
>     >     >                       issued_descs++;
>     >     > +
>     >     > +                     /* Double-checking if we have to use chained mbuf.
>     >     > +                      * It seems there are some apps which could wrongly
>     >     > +                      * have zeroed mbufs chained leading to send null
>     >     > +                      * descriptors to the hw. */
>     >     > +                     if (!pkt_size)
>     >     > +                             break;
>     >     
>     >     For the case of chained mbufs which all have zero size [1], won't this
>     >     cause the next mbufs not to be freed, because rte_pktmbuf_free_seg(*lmbuf)
>     >     is used?
>     > 
>     > Good point. Being honest, we had the problem with mbufs and size 0, and
>     > this last check was not initially there. But we saw performance being low
>     > after the change, and the only thing which could explain it was this sort
>     > of chained mbufs. There was no mbuf allocation problem at all. It was as if
>     > more (null) packets were being sent to the hardware. This last check solved
>     > the performance problem.
>     
>     I assume the performance problem is with the chained mbufs with 0 size; I
>     believe this should be fixed at the application level, not in the PMD.
>     
>     And if the application is sending chained mbufs with 0 size, with the above
>     code it will eventually run out of mbufs, since they are not freed, and the
>     same problem this patch is trying to avoid will occur, just over a longer run.
> 
> This is definitely an app problem, and maybe that last check should be removed
> and the chained mbuf processed, wherever it is coming from, as long as
> "pkt = pkt->next" is not null.
> 
> Are you OK if I send another version without that last if clause?

Yes, thank you.
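
To illustrate the application-level fix suggested above: a guard in front of
rte_eth_tx_burst() could look roughly like the sketch below. This is a
hypothetical helper, not part of the patch, and the exact rte_eth_tx_burst()
prototype (e.g. the port_id type) depends on the DPDK release in use.

#include <rte_branch_prediction.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical application-side helper (sketch only): drop zero-length
 * packets before handing the burst to the PMD, freeing them in the app so
 * their mbufs are not lost. */
static uint16_t
tx_burst_nonzero(uint16_t port_id, uint16_t queue_id,
                 struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t i, n = 0;

        for (i = 0; i < nb_pkts; i++) {
                if (unlikely(pkts[i]->pkt_len == 0)) {
                        /* Free the whole mbuf chain instead of sending it. */
                        rte_pktmbuf_free(pkts[i]);
                        continue;
                }
                pkts[n++] = pkts[i];
        }

        /* The caller still has to handle packets the PMD did not accept. */
        return rte_eth_tx_burst(port_id, queue_id, pkts, n);
}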
>     > 
>     > Once I have said that, I have to admit my explanation implies some serious
>     > problem when handling mbufs, and something the app is doing really badly,
>     > so I could understand someone saying this is hiding a serious problem and
>     > should not be there.
>     > 
>     >     [1]
>     >     As you mentioned in the commit log, this is not the correct thing to do,
>     >     but the patch is trying to harden the PMD against this wrong application
>     >     behavior..
>     > 
>     > If you consider this last check should not be there, I'll be glad to
>     > remove it.
>     > 
>     >     >               }
>     >     >               i++;
>     >     >       }
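
For reference, the mbuf-leak concern in [1] comes down to
rte_pktmbuf_free_seg() releasing only the segment it is given: freeing a
chained packet requires walking pkt->next explicitly, which is what
rte_pktmbuf_free() does on the head mbuf. A minimal sketch (illustrative
helper, not code from the patch):

#include <rte_mbuf.h>

/* Free a chained packet segment by segment; equivalent to calling
 * rte_pktmbuf_free() on the head mbuf. */
static void
free_chain_seg_by_seg(struct rte_mbuf *m)
{
        struct rte_mbuf *next;

        while (m != NULL) {
                next = m->next;           /* save the link before freeing */
                rte_pktmbuf_free_seg(m);  /* frees this segment only */
                m = next;
        }
}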