From: Ilya Maximets
To: dev@dpdk.org, David Marchand, Sergio Gonzalez Monroy, Thomas Monjalon
Cc: Heetae Ahn, Yuanhan Liu, Jianfeng Tan, Neil Horman, Yulong Pei, Ilya Maximets
Date: Tue, 06 Jun 2017 16:33:39 +0300
Message-id: <1496756020-4579-2-git-send-email-i.maximets@samsung.com>
X-Mailer: git-send-email 2.7.4
In-reply-to: <1496756020-4579-1-git-send-email-i.maximets@samsung.com>
References: <1496736832-835-1-git-send-email-i.maximets@samsung.com>
 <1496756020-4579-1-git-send-email-i.maximets@samsung.com>
Subject: [dpdk-dev] [PATCH v5 1/2] mem: balanced allocation of hugepages
List-Id: DPDK patches and discussions

Currently EAL allocates hugepages one by one, not paying attention to the
NUMA node the allocation came from. This behaviour leads to allocation
failures when the number of hugepages available to the application is
limited by cgroups or hugetlbfs and memory is requested from more than
just the first socket.

Example:
	# 90 x 1GB hugepages available in a system

	cgcreate -g hugetlb:/test
	# Limit to 32GB of hugepages
	cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
	# Request 4GB from each of 2 sockets
	cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...

	EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
	EAL: 32 not 90 hugepages of size 1024 MB allocated
	EAL: Not enough memory available on socket 1!
	     Requested: 4096MB, available: 0MB
	PANIC in rte_eal_init():
	Cannot init memory

This happens because all allocated pages end up on socket 0.
Fix this issue by setting the MPOL_PREFERRED mempolicy for each hugepage,
steering it to one of the requested nodes using the following scheme:

 1) Allocate essential hugepages:
    1.1) Allocate just enough hugepages from NUMA node N to cover the
         memory requested for that node.
    1.2) Repeat 1.1 for all NUMA nodes.
 2) Try to map all remaining free hugepages in a round-robin fashion.
 3) Sort the pages and choose the most suitable ones.

This way all essential memory is allocated and the remaining pages are
fairly distributed between all requested nodes.

libnuma is added as a general dependency for EAL.

Fixes: 77988fc08dc5 ("mem: fix allocating all free hugepages")

Signed-off-by: Ilya Maximets
---
 lib/librte_eal/linuxapp/eal/Makefile     |  1 +
 lib/librte_eal/linuxapp/eal/eal_memory.c | 94 ++++++++++++++++++++++++++++++--
 mk/rte.app.mk                            |  3 +
 3 files changed, 94 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 640afd0..1440fc5 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -50,6 +50,7 @@ LDLIBS += -ldl
 LDLIBS += -lpthread
 LDLIBS += -lgcc_s
 LDLIBS += -lrt
+LDLIBS += -lnuma
 
 # specific to linuxapp exec-env
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) := eal.c
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 9c9baf6..5947434 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -358,6 +359,19 @@ static int
 huge_wrap_sigsetjmp(void)
 {
 	return sigsetjmp(huge_jmpenv, 1);
 }
 
+#ifndef ULONG_SIZE
+#define ULONG_SIZE sizeof(unsigned long)
+#endif
+#ifndef ULONG_BITS
+#define ULONG_BITS (ULONG_SIZE * CHAR_BIT)
+#endif
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#endif
+#ifndef BITS_TO_LONGS
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, ULONG_SIZE)
+#endif
+
 /*
  * Mmap all hugepages of hugepage table: it first
  * open a file in
  * hugetlbfs, then mmap() hugepage_sz data in it. If orig is set, the
@@ -366,18 +380,78 @@ static int huge_wrap_sigsetjmp(void)
  * map continguous physical blocks in contiguous virtual blocks.
  */
 static unsigned
-map_all_hugepages(struct hugepage_file *hugepg_tbl,
-		struct hugepage_info *hpi, int orig)
+map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
+		  uint64_t *essential_memory, int orig)
 {
 	int fd;
 	unsigned i;
 	void *virtaddr;
 	void *vma_addr = NULL;
 	size_t vma_len = 0;
+	unsigned long nodemask[BITS_TO_LONGS(RTE_MAX_NUMA_NODES)] = {0UL};
+	unsigned long maxnode = 0;
+	int node_id = -1;
+	bool numa_available = true;
+
+	/* Check if kernel supports NUMA. */
+	if (get_mempolicy(NULL, NULL, 0, 0, 0) < 0 && errno == ENOSYS) {
+		RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n");
+		numa_available = false;
+	}
+
+	if (orig && numa_available) {
+		for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
+			if (internal_config.socket_mem[i])
+				maxnode = i + 1;
+	}
 
 	for (i = 0; i < hpi->num_pages[0]; i++) {
 		uint64_t hugepage_sz = hpi->hugepage_sz;
 
+		if (maxnode) {
+			unsigned int j;
+
+			for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
+				if (essential_memory[j])
+					break;
+
+			if (j == RTE_MAX_NUMA_NODES) {
+				node_id = (node_id + 1) % RTE_MAX_NUMA_NODES;
+				while (!internal_config.socket_mem[node_id]) {
+					node_id++;
+					node_id %= RTE_MAX_NUMA_NODES;
+				}
+			} else {
+				node_id = j;
+				if (essential_memory[j] < hugepage_sz)
+					essential_memory[j] = 0;
+				else
+					essential_memory[j] -= hugepage_sz;
+			}
+
+			nodemask[node_id / ULONG_BITS] =
+						1UL << (node_id % ULONG_BITS);
+
+			RTE_LOG(DEBUG, EAL,
+				"Setting policy MPOL_PREFERRED for socket %d\n",
+				node_id);
+			/*
+			 * Due to old linux kernel bug (feature?) we have to
+			 * increase maxnode by 1. It will be unconditionally
+			 * decreased back to normal value inside the syscall
+			 * handler.
+			 */
+			if (set_mempolicy(MPOL_PREFERRED,
+					  nodemask, maxnode + 1) < 0) {
+				RTE_LOG(ERR, EAL,
+					"Failed to set policy MPOL_PREFERRED: "
+					"%s\n", strerror(errno));
+				return i;
+			}
+
+			nodemask[node_id / ULONG_BITS] = 0UL;
+		}
+
 		if (orig) {
 			hugepg_tbl[i].file_id = i;
 			hugepg_tbl[i].size = hugepage_sz;
@@ -488,6 +562,9 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 		vma_len -= hugepage_sz;
 	}
 
+	if (maxnode && set_mempolicy(MPOL_DEFAULT, NULL, 0) < 0)
+		RTE_LOG(ERR, EAL, "Failed to set mempolicy MPOL_DEFAULT\n");
+
 	return i;
 }
 
@@ -572,6 +649,9 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 		if (hugepg_tbl[i].orig_va == va) {
 			hugepg_tbl[i].socket_id = socket_id;
 			hp_count++;
+			RTE_LOG(DEBUG, EAL,
+				"Hugepage %s is on socket %d\n",
+				hugepg_tbl[i].filepath, socket_id);
 		}
 	}
 }
@@ -1010,6 +1090,11 @@ rte_eal_hugepage_init(void)
 
 	huge_register_sigbus();
 
+	/* make a copy of socket_mem, needed for balanced allocation. */
+	for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
+		memory[i] = internal_config.socket_mem[i];
+
+	/* map all hugepages and sort them */
 	for (i = 0; i < (int)internal_config.num_hugepage_sizes; i ++){
 		unsigned pages_old, pages_new;
 
@@ -1027,7 +1112,8 @@ rte_eal_hugepage_init(void)
 
 		/* map all hugepages available */
 		pages_old = hpi->num_pages[0];
-		pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, 1);
+		pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi,
+					      memory, 1);
 		if (pages_new < pages_old) {
 			RTE_LOG(DEBUG, EAL,
 				"%d not %d hugepages of size %u MB allocated\n",
@@ -1070,7 +1156,7 @@ rte_eal_hugepage_init(void)
 			sizeof(struct hugepage_file), cmp_physaddr);
 
 	/* remap all hugepages */
-	if (map_all_hugepages(&tmp_hp[hp_offset], hpi, 0) !=
+	if (map_all_hugepages(&tmp_hp[hp_offset], hpi, NULL, 0) !=
 	    hpi->num_pages[0]) {
 		RTE_LOG(ERR, EAL, "Failed to remap %u MB pages\n",
 			(unsigned)(hpi->hugepage_sz / 0x100000));
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index bcaf1b3..5f370c9 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -186,6 +186,9 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # The static libraries do not know their dependencies.
 # So linking with static library requires explicit dependencies.
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrt
+ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lnuma
+endif
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER) += -lm
-- 
2.7.4