From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <aburakov@ecsmtp.ir.intel.com>
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by dpdk.org (Postfix) with ESMTP id 52D332C8
 for <dev@dpdk.org>; Tue, 24 Apr 2018 12:19:28 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga101.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 24 Apr 2018 03:19:26 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.49,322,1520924400"; d="scan'208";a="49462707"
Received: from irvmail001.ir.intel.com ([163.33.26.43])
 by fmsmga001.fm.intel.com with ESMTP; 24 Apr 2018 03:19:25 -0700
Received: from sivswdev01.ir.intel.com (sivswdev01.ir.intel.com
 [10.237.217.45])
 by irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id
 w3OAJPUm001389; Tue, 24 Apr 2018 11:19:25 +0100
Received: from sivswdev01.ir.intel.com (localhost [127.0.0.1])
 by sivswdev01.ir.intel.com with ESMTP id w3OAJPSq024928;
 Tue, 24 Apr 2018 11:19:25 +0100
Received: (from aburakov@localhost)
 by sivswdev01.ir.intel.com with LOCAL id w3OAJOmT024924;
 Tue, 24 Apr 2018 11:19:24 +0100
From: Anatoly Burakov <anatoly.burakov@intel.com>
To: dev@dpdk.org
Cc: reshma.pattan@intel.com
Date: Tue, 24 Apr 2018 11:19:23 +0100
Message-Id: <9bd2877ea94d32f99ea6d17afb6e5f16f4340649.1524564892.git.anatoly.burakov@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <c8375f94e07e2ed99090eaefb4f4c4bb657c5206.1524564892.git.anatoly.burakov@intel.com>
References: <c8375f94e07e2ed99090eaefb4f4c4bb657c5206.1524564892.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/3] mem: improve memory preallocation on
	32-bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Tue, 24 Apr 2018 10:19:28 -0000

Previously, if we couldn't preallocate VA space on 32-bit for
one page size, we simply bailed out, even though we could've
tried allocating VA space with other page sizes.

For example, if the user had both 1G and 2M pages enabled and
had asked DPDK to allocate memory on both sockets, DPDK
would've tried to allocate VA space for 1x1G page on both
sockets, failed, and never tried again, even though it
could've allocated the same 1G of VA space as 512x2M pages.

Fix this by retrying with different page sizes if VA space
reservation fails.
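
The fallback logic can be sketched as below. This is a minimal
standalone illustration, not EAL code: try_reserve_va() is a
hypothetical stand-in for the alloc_memseg_list()/alloc_va_space()
pair, and the hard-coded failure of 1G reservations simulates a
fragmented 32-bit VA space.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for alloc_memseg_list() + alloc_va_space(); here 1G
 * reservations are assumed to fail while 2M ones succeed, to mimic
 * a fragmented 32-bit address space. Returns 0 on success.
 */
static int
try_reserve_va(uint64_t page_sz, uint64_t max_mem)
{
	(void)max_mem;
	return page_sz <= (2ULL << 20) ? 0 : -1;
}

/* Try each configured page size in turn (largest first), falling
 * back to the next size on reservation failure instead of bailing
 * out. Returns the page size that worked, or 0 if all failed.
 */
static uint64_t
reserve_with_fallback(const uint64_t *page_szs, size_t n,
		uint64_t max_mem)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (try_reserve_va(page_szs[i], max_mem) == 0)
			return page_szs[i];
	return 0;
}
```

With page sizes { 1G, 2M } and 1G of memory requested, the sketch
falls through the failed 1G attempt and reserves the same amount as
2M pages, which is the behaviour this patch introduces.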

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/common/eal_common_memory.c | 42 +++++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 7 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index c0d4673..d819abe 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -142,6 +142,17 @@ get_mem_amount(uint64_t page_sz, uint64_t max_mem)
 }
 
 static int
+free_memseg_list(struct rte_memseg_list *msl)
+{
+	if (rte_fbarray_destroy(&msl->memseg_arr)) {
+		RTE_LOG(ERR, EAL, "Cannot destroy memseg list\n");
+		return -1;
+	}
+	memset(msl, 0, sizeof(*msl));
+	return 0;
+}
+
+static int
 alloc_memseg_list(struct rte_memseg_list *msl, uint64_t page_sz,
 		uint64_t max_mem, int socket_id, int type_msl_idx)
 {
@@ -339,24 +350,41 @@ memseg_primary_init_32(void)
 					return -1;
 				}
 
-				msl = &mcfg->memsegs[msl_idx++];
+				msl = &mcfg->memsegs[msl_idx];
 
 				if (alloc_memseg_list(msl, hugepage_sz,
 						max_pagesz_mem, socket_id,
-						type_msl_idx))
+						type_msl_idx)) {
+					/* failing to allocate a memseg list is
+					 * a serious error.
+					 */
+					RTE_LOG(ERR, EAL, "Cannot allocate memseg list\n");
 					return -1;
+				}
+
+				if (alloc_va_space(msl)) {
+					/* if we couldn't allocate VA space, we
+					 * can try with smaller page sizes.
+					 */
+					RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size\n");
+					/* deallocate memseg list */
+					if (free_memseg_list(msl))
+						return -1;
+					break;
+				}
 
 				total_segs += msl->memseg_arr.len;
 				cur_pagesz_mem = total_segs * hugepage_sz;
 				type_msl_idx++;
-
-				if (alloc_va_space(msl)) {
-					RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n");
-					return -1;
-				}
+				msl_idx++;
 			}
 			cur_socket_mem += cur_pagesz_mem;
 		}
+		if (cur_socket_mem == 0) {
+			RTE_LOG(ERR, EAL, "Cannot allocate VA space on socket %u\n",
+				socket_id);
+			return -1;
+		}
 	}
 
 	return 0;
-- 
2.7.4