From: Alejandro Lucero
Date: Mon, 21 Aug 2017 14:08:04 +0100
To: Ferruh Yigit
Cc: dev, stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] nfp: handle packets with length 0 as usual ones
In-Reply-To: <2c2eeb2b-8c19-9821-dc08-6bd2c9e6f2b1@intel.com>
References: <1502445950-44582-1-git-send-email-alejandro.lucero@netronome.com> <51e4be70-4fa4-fbab-6e7a-5f8e9c94ee3c@intel.com> <2c2eeb2b-8c19-9821-dc08-6bd2c9e6f2b1@intel.com>
List-Id: DPDK patches and discussions

On Mon, Aug 21, 2017 at 11:34 AM, Ferruh Yigit wrote:

> On 8/18/2017 5:23 PM, Alejandro Lucero wrote:
> >
> > On Fri, Aug 18, 2017 at 4:10 PM, Ferruh Yigit wrote:
> >
> >     On 8/11/2017 11:05 AM, Alejandro Lucero wrote:
> > > A DPDK app could, for whatever reason, send packets with size 0.
> > > The PMD is not sending those packets, which does make sense,
> > > but the problem is that the mbuf is not released either. That leads
> > > to mbufs not being available, because the app trusts that the
> > > PMD will release them.
> > >
> > > Although this is a problem caused by wrong app behaviour, we
> > > should harden the PMD in this regard. Not sending a packet with
> > > size 0 could be problematic, needing special handling inside the
> > > PMD xmit function. It could be a burst of those packets, which can
> > > be easily handled, but it could also be a single packet in a
> > > burst, which is harder to handle.
> > >
> > > It would be simpler to just send that kind of packet, which will
> > > likely be dropped by the hw at some point. The main problem is how
> > > the fw/hw handles the DMA, because a DMA read to a hypothetical
> > > 0x0 address could trigger an IOMMU error. It turns out, it is safe
> > > to send a descriptor with packet size 0 to the hardware: the DMA
> > > never happens, from the PCIe point of view.
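[Editorial aside: the leak the commit message describes can be sketched in plain C. The `struct seg` below is a hypothetical minimal stand-in for DPDK's `rte_mbuf` (only the fields the sketch needs), and `old_xmit`/`new_xmit` model only the loop's control flow, not the real NFP xmit function: with the old `while (pkt_size)` condition a zero-length packet never enters the loop, so its mbuf is never handed to the ring for later freeing, while looping on the segment pointer consumes exactly one descriptor.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical minimal stand-in for DPDK's rte_mbuf; not the real struct. */
struct seg {
	uint32_t data_len;
	struct seg *next;
};

/* Old control flow: loop on the remaining size. A packet with pkt_len 0
 * never enters the loop, so no descriptor is issued and the mbuf is never
 * recorded in the TX ring for later freeing. Returns descriptors issued. */
static unsigned old_xmit(struct seg *pkt, uint32_t pkt_len)
{
	uint32_t pkt_size = pkt_len;
	unsigned issued = 0;

	while (pkt_size) {
		pkt_size -= pkt->data_len;
		issued++;
		if (pkt_size)
			pkt = pkt->next;
	}
	return issued;
}

/* Patched control flow: loop on the segment pointer, so a zero-length
 * packet still consumes one descriptor (with EOP set in the real driver)
 * and its mbuf gets recorded in the ring. */
static unsigned new_xmit(struct seg *pkt, uint32_t pkt_len)
{
	uint32_t pkt_size = pkt_len;
	unsigned issued = 0;

	while (pkt) {
		pkt_size -= pkt->data_len;
		issued++;
		pkt = pkt->next;
		if (!pkt_size)	/* end of packet */
			break;
	}
	return issued;
}
```

For a normal two-segment packet both variants issue two descriptors; for a zero-length packet the old loop issues none (the leak), the new one issues one.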
> > >
> > > Signed-off-by: Alejandro Lucero
> > > ---
> > >  drivers/net/nfp/nfp_net.c | 17 ++++++++++++-----
> > >  1 file changed, 12 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
> > > index 92b03c4..679a91b 100644
> > > --- a/drivers/net/nfp/nfp_net.c
> > > +++ b/drivers/net/nfp/nfp_net.c
> > > @@ -2094,7 +2094,7 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
> > >  		 */
> > >  		pkt_size = pkt->pkt_len;
> > >
> > > -		while (pkt_size) {
> > > +		while (pkt) {
> > >  			/* Copying TSO, VLAN and cksum info */
> > >  			*txds = txd;
> > >
> > > @@ -2126,17 +2126,24 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
> > >  				txq->wr_p = 0;
> > >
> > >  			pkt_size -= dma_size;
> > > -			if (!pkt_size) {
> > > +			if (!pkt_size)
> > >  				/* End of packet */
> > >  				txds->offset_eop |= PCIE_DESC_TX_EOP;
> > > -			} else {
> > > +			else
> > >  				txds->offset_eop &= PCIE_DESC_TX_OFFSET_MASK;
> > > -				pkt = pkt->next;
> > > -			}
> > > +
> > > +			pkt = pkt->next;
> > >  			/* Referencing next free TX descriptor */
> > >  			txds = &txq->txds[txq->wr_p];
> > >  			lmbuf = &txq->txbufs[txq->wr_p].mbuf;
> > >  			issued_descs++;
> > > +
> > > +			/* Double-checking if we have to use chained mbuf.
> > > +			 * It seems there are some apps which could wrongly
> > > +			 * have zeroed mbufs chained, leading to sending null
> > > +			 * descriptors to the hw. */
> > > +			if (!pkt_size)
> > > +				break;
> >
> >     For the case of chained mbufs where all segments have size 0 [1],
> >     won't this cause the next mbufs not to be freed, because
> >     rte_pktmbuf_free_seg(*lmbuf) is used?
> >
> > Good point. Being honest, we had the problem with mbufs of size 0, and
> > this last check was not initially there. But we saw performance drop
> > after the change, and the only thing which could explain it was this
> > sort of chained mbuf. There was no mbuf allocation problem at all. It
> > was as if more (null) packets were being sent to the hardware now.
> > This last check solved the performance problem.
>
> I assume the performance problem is with the chained mbufs with 0 size;
> I believe this should be fixed in the application, not at the PMD level.
>
> And if the application is sending chained mbufs with 0 size, with the
> above code it will eventually run out of mbufs, since they are not
> freed, and the same problem this patch is trying to avoid will occur,
> just over a longer run.

This is definitely an app problem, and maybe that last check should be
removed, processing that chained mbuf, wherever it comes from, as long as
"pkt = pkt->next" is not null. Are you OK if I send another version
without that last if clause?

> > Once I have said that, I have to admit my explanation implies some
> > serious problem when handling mbufs, and something the app is doing
> > really badly, so I could understand someone saying this is hiding a
> > serious problem and should not be there.
> >
> > [1]
> > As you mentioned in the commit log, this is not the correct thing to
> > do, but since the patch is trying to harden the PMD for this wrong
> > application behavior..

> > If you consider this last check should not be there, I'll be glad to
> > remove it.

> > >  		}
> > >  		i++;
> > >  	}
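[Editorial aside: the freeing concern discussed above can be sketched in plain C. The `struct seg` below is a hypothetical minimal stand-in for DPDK's `rte_mbuf`, and the two functions model only the loop's control flow, not the real driver: with the extra `!pkt_size` break, a chain of zero-sized segments is abandoned after the head, so a per-segment free in the style of `rte_pktmbuf_free_seg` never reaches the tail; dropping the break and following `pkt->next` to NULL visits every segment.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical minimal mbuf stand-in (not DPDK's rte_mbuf). */
struct seg {
	uint32_t data_len;
	struct seg *next;
};

/* With the final !pkt_size break: counts segments handed to the TX ring
 * (each of which a per-segment free would later release). For an
 * all-zero chain only the head is counted, so the tail segments would
 * leak -- the concern raised in the review above. */
static unsigned visited_with_break(struct seg *pkt, uint32_t pkt_len)
{
	uint32_t pkt_size = pkt_len;
	unsigned visited = 0;

	while (pkt) {
		pkt_size -= pkt->data_len;
		visited++;
		pkt = pkt->next;
		if (!pkt_size)
			break;
	}
	return visited;
}

/* Without the break: follow the chain until pkt->next yields NULL, so
 * every segment is handed to the ring and can be freed later. */
static unsigned visited_without_break(struct seg *pkt)
{
	unsigned visited = 0;

	while (pkt) {
		visited++;
		pkt = pkt->next;
	}
	return visited;
}
```

On a three-segment all-zero chain the first variant visits one segment, the second visits all three.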