From: Thomas Monjalon
To: Ferruh Yigit
Cc: dev@dpdk.org, Olivier Matz, Bruce Richardson, Chas Williams <3chas3@gmail.com>, Stephen Hemminger, Chas Williams
Date: Thu, 04 Jul 2019 16:01:18 +0200
Message-ID: <14075788.UvRaSRG1TX@xps>
In-Reply-To: <20190513124305.5jo4kdqpjwci5idy@platinum>
References: <20190416155126.26438-1-ferruh.yigit@intel.com> <20190417081254.GA1890@bricha3-MOBL.ger.corp.intel.com> <20190513124305.5jo4kdqpjwci5idy@platinum>
Subject: Re: [dpdk-dev] [PATCH] net: do not insert VLAN tag to shared mbufs
List-Id: DPDK patches and discussions

13/05/2019 14:43, Olivier Matz:
> On Wed, Apr 17, 2019 at 09:12:55AM +0100, Bruce Richardson wrote:
> > On Tue, Apr 16, 2019 at 02:32:18PM -0400, Chas Williams wrote:
> > >
> > > On 4/16/19 12:28 PM, Bruce Richardson wrote:
> > > > On Tue, Apr 16, 2019 at 04:51:26PM +0100, Ferruh Yigit wrote:
> > > > > The vlan_insert() is buggy when it tries to handle shared mbufs;
> > > > > instead, do not support inserting a VLAN tag into shared mbufs and
> > > > > return an error for that case.
> > > > >
> > > > > Signed-off-by: Ferruh Yigit
> > > > > ---
> > > > > Cc: Stephen Hemminger
> > > > > Cc: Chas Williams
> > > > >
> > > > > This is another approach to the RFC to fix vlan_insert():
> > > > > https://patches.dpdk.org/patch/51870/
> > > > >
> > > > > vlan_insert() is mostly used by drivers to insert a VLAN tag into
> > > > > packet data in the Tx path; drivers creating new copies of mbufs in
> > > > > the Tx path may result in unexpected behavior, like mbufs that are
> > > > > never freed or are freed twice.
> > > > > ---
> > > > >  lib/librte_net/rte_ether.h | 11 ++---------
> > > > >  1 file changed, 2 insertions(+), 9 deletions(-)
> > > > >
> > > > So what is the API to be used if one does want to insert a VLAN tag
> > > > into a shared mbuf?
> > >
> > > It's unlikely you would ever want to do that. Have one thread perform
> > > some operation on the mbuf while other threads expect this to have
> > > happened? It seems counter to the way that packets flow through an
> > > application. Typically, you would insert the VLAN tag and then share
> > > the mbuf. Modifying a shared mbuf should make you ask: what are the
> > > other copies expecting?
> > >
> > The thing is that the reference count only indicates the number of
> > pointers to a buffer; it doesn't identify which parts are in use. So in
> > the fragmentation case, there may be only one mbuf actually referencing
> > the header part of the packet, with all other references to the memory
> > pointing further into it. However, point taken about how the application
> > pipeline layout would probably make this issue unlikely.
>
> Yes, the difficulty here is that the condition
>   (!RTE_MBUF_DIRECT(*m) || rte_mbuf_refcnt_read(*m) > 1)
> is not an exact equivalent of "the mbuf is writable".
>
> Of course, if the mbuf is direct and its refcnt is 1, the mbuf is
> writable. But we can imagine other cases where the mbuf is writable.
> For instance, a PMD that receives several packets in one big mbuf (with
> an appropriate headroom for each), then creates one indirect mbuf for
> each packet.
>
> We are probably missing an API to express that the mbuf is writable.
>
> > > > Also, why is it such a problem to create new copies of data inside
> > > > the driver if that is necessary? You create a copy and use that,
> > > > freeing the original (i.e. in all likelihood decrementing the
> > > > ref-count since you no longer use it). You already have the pointer
> > > > to the mbuf pool from the original buffer, so you can get a copy
> > > > from the same place. I'm curious to know why it would be impossible
> > > > to do a functionally correct implementation?
> > >
> > > It is not an issue to do this correctly. Hemminger did submit a patch
> > > that appeared to do this correctly (I haven't tested it). As mentioned
> > > earlier, the tricky part is returning the buffer to the application. If
> > > you create a copy and transmit fails, you need to free that buffer or
> > > return it to the application for it to free. If you free the original
> > > buffer when making the copy, you certainly can't return it to the
> > > application to be freed a second time.
> > >
> > Right. For transmit though, in most cases the only reason for failure is
> > lack of space in a transmit ring, so most NIC drivers can be sure of
> > success before cloning.
> >
> > Overall, it seems the consensus is that for real-world cases it's better
> > to have this patch than not, so I'm OK for it to go into DPDK.
>
> Agree.
>
> Acked-by: Olivier Matz

Applied, sorry this patch was forgotten.
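[Editor's note] The behaviour the patch gives vlan_insert() -- refuse any mbuf that fails the conservative writability test, rather than trying to copy it -- can be sketched with a small self-contained mock. `mock_mbuf`, `mock_mbuf_writable`, and `mock_vlan_insert` are illustrative stand-ins, not the DPDK API; the real fields and helpers (RTE_MBUF_DIRECT, rte_mbuf_refcnt_read) live in rte_mbuf.h:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct rte_mbuf, modelling only the two
 * properties the thread discusses. */
struct mock_mbuf {
    uint16_t refcnt;    /* number of references to this mbuf */
    int is_direct;      /* 1 = direct mbuf, 0 = indirect (clone) */
};

/* The conservative check from the thread: a direct mbuf with refcnt == 1
 * is certainly writable.  As Olivier notes, this is sufficient but not
 * necessary -- some shared layouts are still writable, and this test
 * cannot express that. */
static int mock_mbuf_writable(const struct mock_mbuf *m)
{
    return m->is_direct && m->refcnt == 1;
}

/* Sketch of the patched behaviour: return an error for shared mbufs
 * instead of (incorrectly) trying to handle them. */
static int mock_vlan_insert(struct mock_mbuf *m)
{
    if (!mock_mbuf_writable(m))
        return -1;  /* caller keeps ownership; nothing was modified */
    /* ... a real implementation would prepend the 4-byte VLAN header
     * into the headroom here ... */
    return 0;
}
```

With this shape, the error path never frees or copies the caller's mbuf, which is exactly what avoids the not-freed/double-freed cases the commit message warns about.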