From: Anatoly Burakov <anatoly.burakov@intel.com>
To: dev@dpdk.org
Cc: keith.wiles@intel.com, jianfeng.tan@intel.com, andras.kovacs@ericsson.com,
	laszlo.vadkeri@ericsson.com, benjamin.walker@intel.com,
	bruce.richardson@intel.com, thomas@monjalon.net,
	konstantin.ananyev@intel.com, kuralamudhan.ramakrishnan@intel.com,
	louise.m.daly@intel.com, nelio.laranjeiro@6wind.com, yskoh@mellanox.com,
	pepperjo@japf.ch, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
	olivier.matz@6wind.com, shreyansh.jain@nxp.com,
	gowrishankar.m@linux.vnet.ibm.com
Date: Wed, 11 Apr 2018 13:30:44 +0100
Message-Id: <37d910ac645a685fae79f0eecdabc3452c5f628b.1523448978.git.anatoly.burakov@intel.com>
X-Mailer: git-send-email 1.7.0.7
Subject: [dpdk-dev] [PATCH v6 69/70] malloc: enable validation before new page allocation

Before allocating a new page, give the user a chance to allow or deny
the allocation via callbacks.

Signed-off-by: Anatoly Burakov
Tested-by: Santosh Shukla
Tested-by: Hemant Agrawal
Tested-by: Gowrishankar Muthukrishnan
---
 lib/librte_eal/common/malloc_heap.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 18c7b69..f8daf84 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -196,6 +196,15 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 	int allocd_pages;
 	void *ret, *map_addr;
 
+	alloc_sz = (size_t)pg_sz * n_segs;
+
+	/* first, check if we're allowed to allocate this memory */
+	if (eal_memalloc_mem_alloc_validate(socket,
+			heap->total_size + alloc_sz) < 0) {
+		RTE_LOG(DEBUG, EAL, "User has disallowed allocation\n");
+		return NULL;
+	}
+
 	allocd_pages = eal_memalloc_alloc_seg_bulk(ms, n_segs, pg_sz,
 			socket, true);
 
@@ -205,7 +214,6 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 
 	map_addr = ms[0]->addr;
 	msl = rte_mem_virt2memseg_list(map_addr);
-	alloc_sz = (size_t)msl->page_sz * allocd_pages;
 
 	/* check if we wanted contiguous memory but didn't get it */
 	if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) {
-- 
2.7.4
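
For illustration, here is a minimal sketch of how an application could hook
into this check from the other side. It assumes the allocation validator API
introduced earlier in this patch series (rte_mem_alloc_validator_register(),
rte_mem_alloc_validator_unregister() and the rte_mem_alloc_validator_t
callback type in rte_memory.h); the limit value, the callback body and names
such as app_alloc_validator and "app-mem-cap" are invented for the example
and are not part of this patch.

#include <stdio.h>

#include <rte_eal.h>
#include <rte_memory.h>

/* Hypothetical application-side cap on socket 0 heap growth: 256 MB. */
#define APP_SOCKET0_MEM_LIMIT (256ULL << 20)

/*
 * Validator callback. The EAL consults registered validators (via
 * eal_memalloc_mem_alloc_validate(), as added in alloc_pages_on_heap()
 * above) before allocating new pages, passing the would-be total heap
 * size for the socket. Returning a negative value rejects the
 * allocation, which is what triggers the "User has disallowed
 * allocation" path and makes alloc_pages_on_heap() return NULL.
 */
static int
app_alloc_validator(int socket_id, size_t cur_limit, size_t new_len)
{
	printf("Denying heap growth on socket %d to %zu bytes (limit %zu)\n",
			socket_id, new_len, cur_limit);
	return -1;
}

int
main(int argc, char **argv)
{
	int ret = rte_eal_init(argc, argv);

	if (ret < 0)
		return -1;

	/*
	 * Register the validator for socket 0; it is only expected to be
	 * invoked once total allocated memory on that socket would exceed
	 * the registered limit.
	 */
	if (rte_mem_alloc_validator_register("app-mem-cap",
			app_alloc_validator, 0, APP_SOCKET0_MEM_LIMIT) != 0)
		printf("Failed to register allocation validator\n");

	/*
	 * ... from here on, any rte_malloc() that would need to grow the
	 * socket 0 heap past the limit fails instead of allocating new
	 * pages ...
	 */

	rte_mem_alloc_validator_unregister("app-mem-cap", 0);
	return rte_eal_cleanup();
}

Since the validator API was experimental at the time of this series, such an
application would likely also need to be built with ALLOW_EXPERIMENTAL_API
defined.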