From: Thomas Monjalon
To: santosh, John McNamara, olivier.matz@6wind.com
Cc: dev@dpdk.org, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com, ferruh.yigit@intel.com
Date: Wed, 18 Oct 2017 16:26:35 +0200
Message-ID: <1681695.cUBTeW7Xrl@xps>
In-Reply-To: <80cd844a-511e-5b27-4bc0-ea796611cb28@caviumnetworks.com>
References: <20170831063719.19273-1-santosh.shukla@caviumnetworks.com>
 <2831928.n80VB9rmku@xps>
 <80cd844a-511e-5b27-4bc0-ea796611cb28@caviumnetworks.com>
Subject: Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device

18/10/2017 16:02, santosh:
>
> On Wednesday 18 October 2017 07:15 PM, Thomas Monjalon wrote:
> > 18/10/2017 14:17, santosh:
> >> Hi Thomas,
> >>
> >> On Monday 09 October 2017 02:49 PM, santosh wrote:
> >>> On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> >>>> 09/10/2017 07:46, santosh:
> >>>>> On Monday 09 October 2017 10:31 AM, santosh wrote:
> >>>>>> Hi Thomas,
> >>>>>>
> >>>>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> >>>>>>> 08/10/2017 14:40, Santosh Shukla:
> >>>>>>>> This commit adds a section to the docs listing the mempool
> >>>>>>>> device PMDs available.
> >>>>>>> It is confusing to add a mempool guide, given that we already have
> >>>>>>> a mempool section in the programmer's guide:
> >>>>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
> >>>>>>>
> >>>>>>> And we will probably also need some doc for the bus drivers.
> >>>>>>>
> >>>>>>> I think it would be more interesting to create a platform guide
> >>>>>>> where you can describe the bus and the mempool.
> >>>>>>> OK for doc/guides/platform/octeontx.rst?
> >>>>>> No strong opinion,
> >>>>>>
> >>>>>> but IMO the purpose of introducing a mempool PMD guide was inspired by
> >>>>>> eventdev, which I find pretty well organized.
> >>>>>>
> >>>>>> Yes, we have the mempool_lib guide, but that is more about common mempool
> >>>>>> layer details like the API, structure layout etc. I wanted
> >>>>>> to add a guide which describes the mempool PMDs and their capabilities,
> >>>>>> if any; that's why I included octeontx as a starter and was thinking
> >>>>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
> >>>>>> may come later.
> >>>> Yes, sure, it is interesting.
> >>>> The question is to know if mempool drivers make sense in their own guide
> >>>> or if it's better to group them with all the related platform specifics.
> >>> I vote for keeping them just like eventdev/cryptodev,
> >>> which have the vendor-specific PMDs under one roof (both s/w and h/w).
> >> To be clear and move on to v3 for this patch:
> >> * Your proposition is to document the mempool block in a dir structure like
> >>   doc/guides/platform/octeontx.rst.
> >> And right now we have more than one reference to octeontx.rst in dpdk,
> >> for example:
> >> ./doc/guides/nics/octeontx.rst --> NIC
> >> ./doc/guides/eventdevs/octeontx.rst --> eventdev device
> >>
> >> Keeping the above in mind: my current proposal was to introduce a doc
> >> like eventdev for the mempool block.
> >>
> >> So now I am in two minds: if I opt for your path, should I remove all the
> >> octeontx.rst references from dpdk?
> > I think we must keep octeontx.rst in nics and eventdevs.
> >
> > My proposal was to have a platform guide to give more explanations
> > about the common hardware and bus design.
>
> That way, the event device is also a common hw block, just like the mempool
> block is for the octeontx platform. Also, the octeontx bus is the PCI bus;
> we don't have a platform-specific bus like dpaa has, so the bus stuff is not
> applicable to the octeontx doc (IMO).

Right.

> > Some info for tuning Intel platforms is in the quick start guide,
> > and could be moved later into such a platform guide.
> >
> > With this suggestion, we can include the mempool drivers in the
> > platform guide, as the mempool is really specific to the platform.
> >
> > I thought you agreed on it when talking on IRC.
>
> Yes, we did discuss it on IRC. But I'm still unsure about the scope of that
> guide from the octeontx perspective: that new platform entry has info about
> only one block, which is mempool, and for the other common or specific blocks
> the user has to look around in different directories.

Right. You can point to other sections in the platform guide.
From platform/octeontx.rst, you can point to eventdev/octeontx.rst,
nics/octeontx.rst and mempool/octeontx.rst (if you add it).

> >> and bundle them under one roof OR go by my current proposal.
> >>
> >> Who'll take a call on that?
> > If you strongly feel that the mempool driver is better outside,

> I don't have a strong opinion on the doc; I'm just asking for more opinions here,

Me too, I'm asking for more opinions.

> as I'm not fully convinced by your proposition.

I am convinced we must create a platform guide.
But I am not convinced about where to put the mempool section:
either directly in the platform guide, or in a new mempool guide
which is referenced from the platform guide.

> > you can make it outside in a mempool guide.
> > John, do you have an opinion?

If we do not have more opinions, do as you feel.
Anyway, it will be possible to change it later if needed.
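
As an illustration of the cross-referencing suggested above, here is a minimal
sketch of what a hypothetical doc/guides/platform/octeontx.rst could look like
in reStructuredText. The title and section text are assumptions made purely for
illustration; only the nics, eventdevs and mempool guide paths come from this
thread, and the relative :doc: targets assume the file sits under
doc/guides/platform/.

    OCTEONTX Platform Guide
    =======================

    .. A hypothetical platform guide: it would describe the platform-level
       pieces (common hardware blocks, bus design) and point to the existing
       per-subsystem octeontx guides instead of duplicating them.

    See also:

    * :doc:`../nics/octeontx` for the network (NIC) poll mode driver
    * :doc:`../eventdevs/octeontx` for the event device driver
    * :doc:`../mempool/octeontx` for the mempool driver (if that guide is added)

With cross-references like these, the platform guide can act as the single
entry point for the octeontx platform while the nics and eventdevs guides stay
where they are, as agreed above.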