From: Thomas Monjalon <thomas@monjalon.net>
To: Harman Kalra <hkalra@marvell.com>
Cc: dev@dpdk.org, Raslan Darawsheh <rasland@nvidia.com>,
 Ray Kinsella <mdr@ashroe.eu>, Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
 David Marchand <david.marchand@redhat.com>,
 "viacheslavo@nvidia.com" <viacheslavo@nvidia.com>,
 "matan@nvidia.com" <matan@nvidia.com>
Date: Thu, 14 Oct 2021 10:22:40 +0200
Message-ID: <10896897.yJauvYxkRq@thomas>
In-Reply-To: <24392547.dnzkRMgc80@thomas>
References: <20210826145726.102081-1-hkalra@marvell.com>
 <BN9PR18MB4204A56AE0FB98928883EAEBC5B79@BN9PR18MB4204.namprd18.prod.outlook.com>
 <24392547.dnzkRMgc80@thomas>
Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
 get set APIs

13/10/2021 20:52, Thomas Monjalon:
> 13/10/2021 19:57, Harman Kalra:
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 04/10/2021 11:57, David Marchand:
> > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com> wrote:
> > > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > > > > > +                                                        bool from_hugepage)
> > > > > > > > +{
> > > > > > > > +       struct rte_intr_handle *intr_handle;
> > > > > > > > +       int i;
> > > > > > > > +
> > > > > > > > +       if (from_hugepage)
> > > > > > > > +               intr_handle = rte_zmalloc(NULL,
> > > > > > > > +                                         size * sizeof(struct rte_intr_handle),
> > > > > > > > +                                         0);
> > > > > > > > +       else
> > > > > > > > +               intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> > > > > > >
> > > > > > > We can call the DPDK allocator in all cases.
> > > > > > > That would avoid headaches about why multiprocess does not work
> > > > > > > in some rarely tested cases.
[...]
> > > > I agree with David.
> > > > I prefer a simpler API that always uses rte_malloc, and to make sure
> > > > interrupts are always handled between rte_eal_init and rte_eal_cleanup.
[...]
> > > There are a couple more dependencies on glibc heap APIs:
> > > 1. "rte_eal_alarm_init()" allocates an interrupt instance used for the
> > > timerfd, and it is called before "rte_eal_memory_init()", which does the
> > > memseg init.
> > > Not sure what challenges we may face in moving alarm_init after
> > > memory_init, as it might break some subsystem inits.
> > > Another option could be to allocate the interrupt instance for the timerfd
> > > on the first alarm_setup call.
> 
> Indeed it is an issue.
> 
> [...]
> 
> > > There are many other drivers which statically declare the interrupt
> > > handles inside their respective private structures, and the memory for
> > > those structures is allocated from the heap. For such drivers I also
> > > allocated the interrupt instances using glibc heap APIs.
> 
> Could you use rte_malloc in these drivers?

If we take the direction of two different allocation modes for the interrupts,
I suggest we make it automatic, without any API parameter.
I don't think we have any function to check rte_malloc readiness.
But we can detect whether the shared memory is ready with this check:
rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC
This check is true at the end of rte_eal_init, so it is false during probing.
Would it be enough? Or should we implement rte_malloc_is_ready()?
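
Roughly, something like the sketch below (the helper name
shared_mem_is_ready() and the single-argument prototype are only for
illustration, and depending on where the code lands the mem_config access
may need the EAL-internal headers rather than the public ones):

#include <stdbool.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_interrupts.h>
#include <rte_malloc.h>

/* NOTE: depending on the DPDK version, rte_eal_get_configuration() and
 * struct rte_mem_config may only be visible through EAL-internal headers
 * (eal_private.h / eal_memcfg.h), so this sketch assumes it is built
 * inside lib/eal. */

/* Illustrative helper: the shared memory config reports RTE_MAGIC only
 * once rte_eal_init has completed, so rte_malloc is usable from then on. */
static bool
shared_mem_is_ready(void)
{
	const struct rte_mem_config *mcfg =
		rte_eal_get_configuration()->mem_config;

	return mcfg != NULL && mcfg->magic == RTE_MAGIC;
}

/* Sketch: pick the allocator automatically, no parameter exposed. */
struct rte_intr_handle *
rte_intr_handle_instance_alloc(int size)
{
	if (shared_mem_is_ready())
		return rte_zmalloc(NULL,
				   size * sizeof(struct rte_intr_handle), 0);

	/* Before the EAL memory subsystem is up (e.g. during early init
	 * or probing), fall back to the libc heap. */
	return calloc(size, sizeof(struct rte_intr_handle));
}

That would keep the allocation mode an internal detail and let callers use
the same function before and after rte_eal_init.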