From: Thomas Monjalon
To: Harman Kalra
Cc: David Marchand, dev@dpdk.org, Raslan Darawsheh, Ray Kinsella,
 Dmitry Kozlyuk, viacheslavo@nvidia.com, matan@nvidia.com
Date: Thu, 14 Oct 2021 11:41:19 +0200
Message-ID: <4395254.EVvzvEdfqG@thomas>
References: <20210826145726.102081-1-hkalra@marvell.com> <10896897.yJauvYxkRq@thomas>
Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs

14/10/2021 11:31, Harman Kalra:
> From: Thomas Monjalon
> > 13/10/2021 20:52, Thomas Monjalon:
> > > 13/10/2021 19:57, Harman Kalra:
> > > > From: dev On Behalf Of Harman Kalra
> > > > > From: Thomas Monjalon
> > > > > > 04/10/2021 11:57, David Marchand:
> > > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > > wrote:
> > > > > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > > > > > > > +						bool from_hugepage)
> > > > > > > > > > +{
> > > > > > > > > > +	struct rte_intr_handle *intr_handle;
> > > > > > > > > > +	int i;
> > > > > > > > > > +
> > > > > > > > > > +	if (from_hugepage)
> > > > > > > > > > +		intr_handle = rte_zmalloc(NULL,
> > > > > > > > > > +			size * sizeof(struct rte_intr_handle),
> > > > > > > > > > +			0);
> > > > > > > > > > +	else
> > > > > > > > > > +		intr_handle = calloc(1,
> > > > > > > > > > +			size * sizeof(struct rte_intr_handle));
> > > > > > > > >
> > > > > > > > > We can call the DPDK allocator in all cases.
> > > > > > > > > That would avoid headaches on why multiprocess does not
> > > > > > > > > work in some rarely tested cases.
> [...]
> > > > > > I agree with David.
> > > > > > I prefer a simpler API which always uses rte_malloc, and makes
> > > > > > sure interrupts are always handled between rte_eal_init and
> > > > > > rte_eal_cleanup.
> [...]
> > > > > There are a couple more dependencies on glibc heap APIs:
> > > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance which is
> > > > > used for timerfd, and is called before "rte_eal_memory_init()" which
> > > > > does the memseg init.
> > > > > Not sure what challenges we may face in moving alarm_init
> > > > > after memory_init, as it might break some subsystem inits.
> > > > > Another option could be to allocate the interrupt instance for timerfd
> > > > > on the first alarm_setup call.
> > >
> > > Indeed it is an issue.
> > >
> > > [...]
> > > > > There are many other drivers which statically declare the
> > > > > interrupt handles inside their respective private structures, and
> > > > > memory for those structures was allocated from the heap. For such
> > > > > drivers I allocated interrupt instances also using glibc heap APIs.
> > >
> > > Could you use rte_malloc in these drivers?
> >
> > If we take the direction of two different allocation modes for the interrupts,
> > I suggest we make it automatic, without any API parameter.
> > We don't have any function to check rte_malloc readiness, I think.
> > But we can detect whether shared memory is ready with this check:
> > rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC
> > This check is true at the end of rte_eal_init, so it is false during probing.
> > Would it be enough? Or should we implement rte_malloc_is_ready()?
>
> Hi Thomas,
>
> It's a very good suggestion. Let's implement "rte_malloc_is_ready()", which
> could be as simple as the
> "rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC" check.
> There may be more consumers for this API in the future.

You cannot rely on the magic because it is set only after probing.
For such an API, you need another internal flag to check that malloc is set up.
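
For illustration only (not part of the series under review), a minimal sketch of
how the internal-flag idea could be combined with an automatic allocation mode.
The names eal_mark_malloc_ready(), rte_malloc_is_ready() and
intr_instance_alloc() are hypothetical, not existing DPDK APIs:

#include <stdbool.h>
#include <stdlib.h>

#include <rte_interrupts.h>
#include <rte_malloc.h>

/* Internal EAL flag, set by the malloc init path itself instead of relying
 * on mem_config->magic, which is written only at the end of rte_eal_init(),
 * i.e. after probing. */
static bool malloc_ready;

/* Hypothetical: called from the EAL once the rte_malloc heap is usable. */
void
eal_mark_malloc_ready(void)
{
	malloc_ready = true;
}

/* Hypothetical public helper proposed in the discussion above. */
bool
rte_malloc_is_ready(void)
{
	return malloc_ready;
}

/* Automatic mode: allocate from the DPDK heap when it is ready, fall back
 * to the glibc heap during early init (e.g. alarm/timerfd setup). */
struct rte_intr_handle *
intr_instance_alloc(int size)
{
	if (rte_malloc_is_ready())
		return rte_zmalloc(NULL, size * sizeof(struct rte_intr_handle), 0);
	return calloc(size, sizeof(struct rte_intr_handle));
}

The matching free path would then need to remember which heap an instance came
from (rte_free() vs free()), which is the main cost of making the choice
automatic rather than passing a parameter.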