While testing SPDK we hit a problem: when we try to allocate 2 MB hugepages for DPDK (20480 pages across two NUMA nodes, per the sysfs output below), memory initialization sometimes fails. Here is the relevant log:

EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
...
EAL: Trying to obtain current memory policy.
EAL: Requesting 11264 pages of size 2MB from socket 0
EAL: Requesting 9216 pages of size 2MB from socket 1
EAL: Attempting to map 22528M on socket 0
EAL: Could not find space for memseg. Please increase 32768 and/or 65536 in configuration.
EAL: Couldn't remap hugepage files into memseg lists
EAL: FATAL: Cannot init memory

And the hugepage distribution across the NUMA nodes:

cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
11264
9216

These are the relevant memseg configuration values:

#define RTE_MAX_MEMSEG_LISTS 128
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
#define RTE_MAX_MEMSEG_PER_TYPE 32768
#define RTE_MAX_MEM_MB_PER_TYPE 65536

We found that if the number of contiguous hugepages is larger than RTE_MAX_MEMSEG_PER_LIST, remap_segment() fails, even though other memseg lists are still available. As the log shows, we have 4 memseg lists, each of which can hold 8192 segments, but with a run of 11264 contiguous hugepages remap_segment() cannot find any single memseg list able to hold them all.

So my question is: why does remap_segment() have to map a run of contiguous hugepages into a single memseg list? Can we split it across two memseg lists? We tried doing exactly that (a rough sketch is at the end of this mail), and it seems to work in our environment. Is there any potential risk?
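For concreteness, here is the arithmetic behind the failure, using the values above (each segment holds one 2 MB page):

    per-list capacity: RTE_MAX_MEMSEG_PER_LIST = 8192 segments
                       8192 segs * 2 MB = 16384 MB
    socket-0 run:      11264 contiguous pages -> needs 11264 segments
                       11264 * 2 MB = 22528 MB (the "Attempting to map 22528M" line)

So the socket-0 run needs more segments than any one list can hold, and the mapping fails even though 4 lists * 8192 = 32768 segments are free in total.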
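And for reference, this is roughly the change we tried. It is only a sketch, not our exact patch: it assumes the static remap_segment(hugepages, seg_start, seg_end) helper in the Linux EAL's eal_memory.c, and the wrapper name remap_segment_capped is ours:

static int
remap_segment_capped(struct hugepage_file *hugepages, int seg_start,
		int seg_end)
{
	/* Instead of remapping one contiguous run in a single call (and
	 * therefore into a single memseg list), cap each call so no run
	 * ever needs more segments than one list can hold. */
	while (seg_start < seg_end) {
		int chunk_end = seg_start + RTE_MAX_MEMSEG_PER_LIST;

		if (chunk_end > seg_end)
			chunk_end = seg_end;
		/* Each capped chunk may land in a different memseg list. */
		if (remap_segment(hugepages, seg_start, chunk_end) < 0)
			return -1;
		seg_start = chunk_end;
	}
	return 0;
}

This keeps all pages mapped, but a physically contiguous run is then described by two memseg lists, which is exactly the part we are not sure is safe.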