From mboxrd@z Thu Jan 1 00:00:00 1970
In-Reply-To: <51e4be70-4fa4-fbab-6e7a-5f8e9c94ee3c@intel.com>
References: <1502445950-44582-1-git-send-email-alejandro.lucero@netronome.com>
 <51e4be70-4fa4-fbab-6e7a-5f8e9c94ee3c@intel.com>
From: Alejandro Lucero
Date: Fri, 18 Aug 2017 17:23:44 +0100
To: Ferruh Yigit
Cc: dev, stable@dpdk.org
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] nfp: handle packets with length 0 as usual ones
List-Id: patches for DPDK stable branches

On Fri, Aug 18, 2017 at 4:10 PM, Ferruh Yigit wrote:

> On 8/11/2017 11:05 AM, Alejandro Lucero wrote:
> > A DPDK app could, for whatever reason, send packets with size 0.
> > The PMD does not send those packets, which makes sense, but the
> > problem is that the mbuf is not released either. That leads to
> > mbufs not being available, because the app trusts that the PMD
> > will release them.
> >
> > Although this problem is caused by wrong app behaviour, we should
> > harden the PMD in this regard. Not sending a packet with size 0
> > could be problematic, needing special handling inside the PMD xmit
> > function. It could be a burst of those packets, which can be
> > easily handled, but it could also be a single packet in a burst,
> > which is harder to handle.
> >
> > It is simpler to just send such packets, which will likely be
> > dropped by the hw at some point. The main problem is how the fw/hw
> > handles the DMA, because a DMA read to a hypothetical 0x0 address
> > could trigger an IOMMU error. It turns out it is safe to send a
> > descriptor with packet size 0 to the hardware: the DMA never
> > happens, from the PCIe point of view.
> >
> > Signed-off-by: Alejandro Lucero
> > ---
> >  drivers/net/nfp/nfp_net.c | 17 ++++++++++++-----
> >  1 file changed, 12 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
> > index 92b03c4..679a91b 100644
> > --- a/drivers/net/nfp/nfp_net.c
> > +++ b/drivers/net/nfp/nfp_net.c
> > @@ -2094,7 +2094,7 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
> >                */
> >               pkt_size = pkt->pkt_len;
> >
> > -             while (pkt_size) {
> > +             while (pkt) {
> >                       /* Copying TSO, VLAN and cksum info */
> >                       *txds = txd;
> >
> > @@ -2126,17 +2126,24 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
> >                               txq->wr_p = 0;
> >
> >                       pkt_size -= dma_size;
> > -                     if (!pkt_size) {
> > +                     if (!pkt_size)
> >                               /* End of packet */
> >                               txds->offset_eop |= PCIE_DESC_TX_EOP;
> > -                     } else {
> > +                     else
> >                               txds->offset_eop &= PCIE_DESC_TX_OFFSET_MASK;
> > -                             pkt = pkt->next;
> > -                     }
> > +
> > +                     pkt = pkt->next;
> >                       /* Referencing next free TX descriptor */
> >                       txds = &txq->txds[txq->wr_p];
> >                       lmbuf = &txq->txbufs[txq->wr_p].mbuf;
> >                       issued_descs++;
> > +
> > +                     /* Double-checking if we have to use chained mbuf.
> > +                      * It seems there are some apps which could wrongly
> > +                      * have zeroed mbufs chained leading to send null
> > +                      * descriptors to the hw. */
> > +                     if (!pkt_size)
> > +                             break;
>
> For the case of chained mbufs which are all zero size [1], won't this
> cause the next mbufs not to be freed, because
> rte_pktmbuf_free_seg(*lmbuf) is used?
>

Good point. Being honest, we had the problem with mbufs of size 0, and
this last check was not initially there. But we saw performance being low
after the change, and the only thing which could explain it was this sort
of chained mbufs. There was no mbuf allocation problem at all. It was as
if more (null) packets were being sent to the hardware. This last check
solved the performance problem.
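The patched loop above can be distilled into a small stand-alone model (the `mock_mbuf` struct and `issue_descs()` helper are hypothetical stand-ins for `rte_mbuf` and the driver code, not the real NFP implementation): iterating `while (pkt)` instead of `while (pkt_size)` means a zero-length packet still consumes one descriptor, gets EOP set, and its mbuf can later be reclaimed, while the trailing `break` stops walking a chain of zeroed mbufs.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for rte_mbuf: only the fields the loop touches. */
struct mock_mbuf {
	uint16_t data_len;      /* bytes in this segment */
	uint32_t pkt_len;       /* bytes in the whole chain */
	struct mock_mbuf *next; /* next segment, NULL at end of chain */
};

#define EOP_FLAG 0x1

/* Model of the patched descriptor loop: iterate over segments
 * (while (pkt)) rather than over remaining bytes (while (pkt_size)),
 * so a size-0 packet still produces one descriptor with EOP set.
 * Returns the number of descriptors issued; *last_flags receives the
 * flags of the final descriptor. */
static int issue_descs(struct mock_mbuf *pkt, int *last_flags)
{
	uint32_t pkt_size = pkt->pkt_len;
	int issued = 0;

	while (pkt) {
		uint16_t dma_size = pkt->data_len;

		pkt_size -= dma_size;
		*last_flags = (pkt_size == 0) ? EOP_FLAG : 0;

		pkt = pkt->next;
		issued++;

		/* Guard against chains of zeroed mbufs: once the whole
		 * packet length is accounted for, stop instead of walking
		 * bogus next pointers and posting null descriptors. */
		if (pkt_size == 0)
			break;
	}
	return issued;
}
```

With the old `while (pkt_size)` condition, a size-0 packet would issue zero descriptors and its mbuf would never be bound to a descriptor, hence never freed on reclaim.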
Once I have said that, I have to admit my explanation implies some
serious problem when handling mbufs, and something the app is doing
really badly, so I could understand someone saying this is hiding a
serious problem and should not be there.

> [1]
> As you mentioned in the commit log, this is not the correct thing to
> do, but the patch is trying to harden the PMD against this wrong
> application behavior..

If you consider this last check should not be there, I'll be glad to
remove it.

> >               }
> >               i++;
> >       }
> >
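Ferruh's freeing concern can also be modeled in miniature (again with a hypothetical `mini_mbuf` struct and `freed_after_tx()` helper, not the real driver): because each TX descriptor is reclaimed with a per-segment free (as with `rte_pktmbuf_free_seg()`), only segments that were bound to a descriptor ever get freed, and the early `break` on `pkt_size == 0` binds only the head of a chain of zeroed mbufs.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature mbuf, just enough to count frees. */
struct mini_mbuf {
	uint16_t data_len;
	uint32_t pkt_len;
	int freed;              /* stands in for returning to the pool */
	struct mini_mbuf *next;
};

/* Model of the concern above: descriptors are reclaimed one segment at
 * a time (per-segment free), so only segments bound to a descriptor are
 * ever freed. With the early break on pkt_size == 0, a chain of zeroed
 * mbufs binds only its head segment; the tail segments leak.
 * Returns how many segments of the chain end up freed. */
static int freed_after_tx(struct mini_mbuf *pkt)
{
	uint32_t pkt_size = pkt->pkt_len;
	int freed = 0;

	while (pkt) {
		struct mini_mbuf *seg = pkt;

		pkt_size -= seg->data_len;
		pkt = pkt->next;

		seg->freed = 1; /* descriptor reclaim frees this segment */
		freed++;

		if (pkt_size == 0) /* early break: rest of chain unbound */
			break;
	}
	return freed;
}
```

For a well-formed multi-segment packet every segment is bound and freed; for a chain of zeroed mbufs only the first is, which is exactly the trade-off being discussed: the check avoids posting null descriptors but leaves the (wrongly built) tail segments unfreed.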