From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fengnan Chang
To: anatoly.burakov@intel.com, dev@dpdk.org
Cc: Fengnan Chang, Lin Li
Subject: [PATCH v2] eal: fix EAL init failure with too many contiguous memsegs in legacy mode
Date: Mon, 22 May 2023 20:41:07 +0800
Message-Id: <20230522124107.99877-1-changfengnan@bytedance.com>
X-Mailer: git-send-email 2.37.1 (Apple Git-137.1)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Under legacy mode, if the number of contiguous memsegs is greater than
RTE_MAX_MEMSEG_PER_LIST, EAL init fails even though other memseg lists
are still empty, because remap_needed_hugepages only ever tries a single
memseg list per segment.

Fix this by adding an argument to remap_segment reporting how many pages
it actually mapped: remap_segment maps as many pages as the chosen list
can hold, and if the segment exceeds that list's capacity,
remap_needed_hugepages keeps calling it to map the remaining pages into
other lists.
For example, with this hugepage configuration:

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  10241
  10239

startup log:

  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
  EAL: Requesting 13370 pages of size 2MB from socket 0
  EAL: Requesting 7110 pages of size 2MB from socket 1
  EAL: Attempting to map 14220M on socket 1
  EAL: Allocated 14220M on socket 1
  EAL: Attempting to map 26740M on socket 0
  EAL: Could not find space for memseg. Please increase 32768 and/or 65536 in configuration.
  EAL: Couldn't remap hugepage files into memseg lists
  EAL: FATAL: Cannot init memory
  EAL: Cannot init memory

Signed-off-by: Fengnan Chang
Signed-off-by: Lin Li
---
 lib/eal/linux/eal_memory.c | 33 +++++++++++++++++++++------------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 60fc8cc6ca..b2e6453fbe 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -657,12 +657,12 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl,
 }
 
 static int
-remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
+remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end, int *mapped_seg_len)
 {
 	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
 	struct rte_memseg_list *msl;
 	struct rte_fbarray *arr;
-	int cur_page, seg_len;
+	int cur_page, seg_len, cur_len;
 	unsigned int msl_idx;
 	int ms_idx;
 	uint64_t page_sz;
@@ -692,8 +692,9 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
 
 		/* leave space for a hole if array is not empty */
 		empty = arr->count == 0;
+		cur_len = RTE_MIN((unsigned int)seg_len, arr->len - arr->count - (empty ? 0 : 1));
 		ms_idx = rte_fbarray_find_next_n_free(arr, 0,
-				seg_len + (empty ? 0 : 1));
+				cur_len + (empty ? 0 : 1));
 
 		/* memseg list is full? */
 		if (ms_idx < 0)
@@ -704,12 +705,12 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
 		 */
 		if (!empty)
 			ms_idx++;
+		*mapped_seg_len = cur_len;
 		break;
 	}
 	if (msl_idx == RTE_MAX_MEMSEG_LISTS) {
-		RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase %s and/or %s in configuration.\n",
-			RTE_STR(RTE_MAX_MEMSEG_PER_TYPE),
-			RTE_STR(RTE_MAX_MEM_MB_PER_TYPE));
+		RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST "
+			"RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n");
 		return -1;
 	}
@@ -725,6 +726,8 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
 		void *addr;
 		int fd;
 
+		if (cur_page - seg_start == *mapped_seg_len)
+			break;
 		fd = open(hfile->filepath, O_RDWR);
 		if (fd < 0) {
 			RTE_LOG(ERR, EAL, "Could not open '%s': %s\n",
@@ -986,7 +989,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages)
 static int
 remap_needed_hugepages(struct hugepage_file *hugepages, int n_pages)
 {
-	int cur_page, seg_start_page, new_memseg, ret;
+	int cur_page, seg_start_page, new_memseg, ret, mapped_seg_len = 0;
 
 	seg_start_page = 0;
 	for (cur_page = 0; cur_page < n_pages; cur_page++) {
@@ -1023,21 +1026,27 @@ remap_needed_hugepages(struct hugepage_file *hugepages, int n_pages)
 			/* if this isn't the first time, remap segment */
 			if (cur_page != 0) {
 				ret = remap_segment(hugepages, seg_start_page,
-						cur_page);
+						cur_page, &mapped_seg_len);
 				if (ret != 0)
 					return -1;
 			}
+			cur_page = seg_start_page + mapped_seg_len;
 			/* remember where we started */
 			seg_start_page = cur_page;
+			mapped_seg_len = 0;
 		}
 		/* continuation of previous memseg */
 	}
 	/* we were stopped, but we didn't remap the last segment, do it now */
 	if (cur_page != 0) {
-		ret = remap_segment(hugepages, seg_start_page,
-				cur_page);
-		if (ret != 0)
-			return -1;
+		while (seg_start_page < n_pages) {
+			ret = remap_segment(hugepages, seg_start_page,
+					cur_page, &mapped_seg_len);
+			if (ret != 0)
+				return -1;
+			seg_start_page = seg_start_page + mapped_seg_len;
+			mapped_seg_len = 0;
+		}
 	}
 	return 0;
 }
-- 
2.37.1 (Apple Git-137.1)