From: Thomas Monjalon
To: Harman Kalra
Cc: Olivier Matz, "reshma.pattan@intel.com", "arybchenko@solarflare.com", "dev@dpdk.org", Jerin Jacob Kollanukkaran
Date: Fri, 05 Jul 2019 17:09:38 +0200
Subject: Re: [dpdk-dev] [EXT] Re: [PATCH] app/pdump: enforcing pdump to use sw mempool

05/07/2019 16:39, Harman Kalra:
> On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> > On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > > 15/03/2019 16:27, Harman Kalra:
> > > > Since pdump uses SW rings to manage packets,
> > > > pdump should use a SW ring mempool for managing its
> > > > own copy of packets.
> > >
> > > I'm not sure I understand the reasoning.
> > > Reshma, Olivier, Andrew, any opinion?
> > >
> > > Let's take a decision on this very old patch.
> >
> > From what I understand, many packet mempools are created to
> > store the copies of dumped packets. I suppose that it may not be
> > possible to create that many mempools by using the "best" mbuf pool
> > ops (from rte_mbuf_best_mempool_ops()).
> >
> > Using "ring_mp_mc" as the mempool ops should always be possible.
> > I think it would be safer to use "ring_mp_mc" instead of
> > CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
> > overridden on a specific platform.
> >
> > Olivier
>
> Following are some reasons for this patch:
>
> 1. As we all know, the dpdk-pdump app creates a mempool for receiving
> packets (from the primary process) into mbufs, which then get tx'ed
> into the pcap device and freed. Using a HW mempool for the dpdk-pdump
> mbufs was generating a segmentation fault, because the HW mempool VFIO
> is set up by the primary process and the secondary does not have
> access to its BAR regions.
>
> 2. Setting up a separate HW mempool VFIO device for the secondary
> generates the following error:
>     "cannot find TAILQ entry for PCI device!"
>     http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
> which means the secondary cannot set up a new device that was not set
> up by the primary.
>
> 3. Since pdump creates the mempool only for its own local mbufs, we do
> not see a need for a HW mempool; in our opinion a SW mempool is capable
> enough to work in all conditions.

OK

From the commit log, it is just missing the explanation that a HW
mempool cannot be used in the secondary process if it was initialized
in the primary, and cannot be initialized in the secondary process
either. Then it will become clear :)

Please, do you want to reword it in a v2?
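
---

For reference, below is a minimal sketch of the approach discussed
above: creating the pdump copy pool with explicit "ring_mp_mc" ops
instead of the platform default. The helper name, cache size and error
handling are illustrative assumptions, not taken from the actual patch:

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>

    /* Hypothetical helper: allocate the pdump copy pool on the SW
     * "ring_mp_mc" mempool ops, so the secondary process never depends
     * on a HW mempool that only the primary has initialized. */
    static struct rte_mempool *
    pdump_sw_pool_create(const char *name, unsigned int nb_mbufs)
    {
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create_by_ops(name, nb_mbufs,
                256,                        /* per-lcore cache size */
                0,                          /* private data size */
                RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room size */
                rte_socket_id(),
                "ring_mp_mc");              /* SW ring ops, MP/MC safe */
        if (mp == NULL)
            printf("pool %s creation failed: %s\n",
                   name, rte_strerror(rte_errno));
        return mp;
    }

Since "ring_mp_mc" is multi-producer/multi-consumer safe and purely
software based, such a pool works in the secondary process regardless
of which mempool driver the primary process configured.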