From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga02.intel.com (mga02.intel.com [134.134.136.20])
 by dpdk.org (Postfix) with ESMTP id 983AB4C96
 for ; Fri, 29 Mar 2019 18:55:33 +0100 (CET)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 29 Mar 2019 10:55:32 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.60,285,1549958400"; d="scan'208";a="311540287"
Received: from irvmail001.ir.intel.com ([163.33.26.43])
 by orsmga005.jf.intel.com with ESMTP; 29 Mar 2019 10:55:30 -0700
Received: from sivswdev05.ir.intel.com (sivswdev05.ir.intel.com [10.243.17.64])
 by irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id x2THtU8x022596;
 Fri, 29 Mar 2019 17:55:30 GMT
Received: from sivswdev05.ir.intel.com (localhost [127.0.0.1])
 by sivswdev05.ir.intel.com with ESMTP id x2THtUQD019440;
 Fri, 29 Mar 2019 17:55:30 GMT
Received: (from aburakov@localhost)
 by sivswdev05.ir.intel.com with LOCAL id x2THtUHX019436;
 Fri, 29 Mar 2019 17:55:30 GMT
From: Anatoly Burakov
To: dev@dpdk.org
Cc: John McNamara , Marko Kovacevic , david.marchand@redhat.com,
 thomas@monjalon.net, maxime.coquelin@redhat.com
Date: Fri, 29 Mar 2019 17:55:29 +0000
Message-Id: <2b5404285e5ddc460a857cc90130db1b2c717dfc.1553882085.git.anatoly.burakov@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <728c19fa1ed26cdd319fe65e23c4058363dbf2dd.1553882085.git.anatoly.burakov@intel.com>
References: <728c19fa1ed26cdd319fe65e23c4058363dbf2dd.1553882085.git.anatoly.burakov@intel.com>
In-Reply-To: <07f664c33ddedaa5dcfe82ecb97d931e68b7e33a.1550855529.git.anatoly.burakov@intel.com>
References: <07f664c33ddedaa5dcfe82ecb97d931e68b7e33a.1550855529.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/2] memalloc: do not use lockfiles for single
 file segments mode
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 29 Mar 2019 17:55:34 -0000

Due to internal glibc limitations [1], DPDK may exhaust internal file
descriptor limits when using smaller page sizes, which results in user
applications being unable to use system calls such as select().

The single file segments option stores lock files per page to ensure that
pages are deleted when there are no more users. However, this is not
necessary, because the processes will be holding onto the pages anyway,
due to mmap(). Thus, removing pages from the filesystem is safe even
though they may still be in use by some other secondary process.

As a result, single file segments mode no longer stores inordinate
amounts of segment fds, and the above issue with fd limits is solved.
However, this will not work for legacy mem mode. For that mode, simply
document that using bigger page sizes is the only option.

[1] https://mails.dpdk.org/archives/dev/2019-February/124386.html
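As a minimal illustration of the glibc limitation in question (a sketch,
not part of the patch): glibc's select() operates on a fixed-size fd_set
bitmap of FD_SETSIZE (1024) entries, so a process holding one fd per
hugepage quickly ends up with descriptors numbered above 1023, which
cannot be passed to select() at all, regardless of the process-wide fd
limit. The fd value below is hypothetical.

/* Illustrative sketch only: FD_SET()/FD_ISSET() on an fd that is
 * >= FD_SETSIZE is undefined behaviour, so such descriptors must be
 * rejected before they ever reach select().
 */
#include <stdio.h>
#include <sys/select.h>

static int usable_with_select(int fd)
{
	return fd >= 0 && fd < FD_SETSIZE;
}

int main(void)
{
	printf("FD_SETSIZE = %d\n", FD_SETSIZE); /* 1024 on glibc */
	printf("fd 2000 usable with select()? %s\n",
		usable_with_select(2000) ? "yes" : "no");
	return 0;
}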
Signed-off-by: Anatoly Burakov
---
 .../prog_guide/env_abstraction_layer.rst |  18 ++
 lib/librte_eal/common/eal_filesystem.h   |  11 -
 lib/librte_eal/linux/eal/eal_memalloc.c  | 285 +++++-------------
 3 files changed, 96 insertions(+), 218 deletions(-)

diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 2361c3b8f..faa5f6a0d 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -214,6 +214,24 @@ Normally, these options do not need to be changed.
 
 can later be mapped into that preallocated VA space (if dynamic memory mode
 is enabled), and can optionally be mapped into it at startup.
 
++ Segment file descriptors
+
+On Linux, in most cases, EAL will store segment file descriptors in EAL. This
+can become a problem when using smaller page sizes due to underlying limitations
+of ``glibc`` library. For example, Linux API calls such as ``select()`` may not
+work correctly because ``glibc`` does not support more than certain number of
+file descriptors.
+
+There are two possible solutions for this problem. The recommended solution is
+to use ``--single-file-segments`` mode, as that mode will not use a file
+descriptor per each page, and it will keep compatibility with Virtio with
+vhost-user backend. This option is not available when using ``--legacy-mem``
+mode.
+
+Another option is to use bigger page sizes. Since fewer pages are required to
+cover the same memory area, fewer file descriptors will be stored internally
+by EAL.
+
 Support for Externally Allocated Memory
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/librte_eal/common/eal_filesystem.h b/lib/librte_eal/common/eal_filesystem.h
index 89a3added..f2f83e712 100644
--- a/lib/librte_eal/common/eal_filesystem.h
+++ b/lib/librte_eal/common/eal_filesystem.h
@@ -98,17 +98,6 @@ eal_get_hugefile_path(char *buffer, size_t buflen, const char *hugedir, int f_id
 	return buffer;
 }
 
-/** String format for hugepage map lock files. */
-#define HUGEFILE_LOCK_FMT "%s/map_%d.lock"
-static inline const char *
-eal_get_hugefile_lock_path(char *buffer, size_t buflen, int f_id)
-{
-	snprintf(buffer, buflen, HUGEFILE_LOCK_FMT, rte_eal_get_runtime_dir(),
-			f_id);
-	buffer[buflen - 1] = '\0';
-	return buffer;
-}
-
 /** define the default filename prefix for the %s values above */
 #define HUGEFILE_PREFIX_DEFAULT "rte"
 
diff --git a/lib/librte_eal/linux/eal/eal_memalloc.c b/lib/librte_eal/linux/eal/eal_memalloc.c
index e90fe6c95..4c51fb8ca 100644
--- a/lib/librte_eal/linux/eal/eal_memalloc.c
+++ b/lib/librte_eal/linux/eal/eal_memalloc.c
@@ -83,13 +83,8 @@ static int fallocate_supported = -1; /* unknown */
 /*
  * we have two modes - single file segments, and file-per-page mode.
  *
- * for single-file segments, we need some kind of mechanism to keep track of
- * which hugepages can be freed back to the system, and which cannot. we cannot
- * use flock() because they don't allow locking parts of a file, and we cannot
- * use fcntl() due to issues with their semantics, so we will have to rely on a
- * bunch of lockfiles for each page. so, we will use 'fds' array to keep track
- * of per-page lockfiles. we will store the actual segment list fd in the
- * 'memseg_list_fd' field.
+ * for single-file segments, we use memseg_list_fd to store the segment fd,
+ * while the fds[] will not be allocated, and len will be set to 0.
  *
  * for file-per-page mode, each page will have its own fd, so 'memseg_list_fd'
  * will be invalid (set to -1), and we'll use 'fds' to keep track of page fd's.
@@ -245,80 +240,6 @@ static int lock(int fd, int type)
 	return 1;
 }
 
-static int get_segment_lock_fd(int list_idx, int seg_idx)
-{
-	char path[PATH_MAX] = {0};
-	int fd;
-
-	if (list_idx < 0 || list_idx >= (int)RTE_DIM(fd_list))
-		return -1;
-	if (seg_idx < 0 || seg_idx >= fd_list[list_idx].len)
-		return -1;
-
-	fd = fd_list[list_idx].fds[seg_idx];
-	/* does this lock already exist? */
-	if (fd >= 0)
-		return fd;
-
-	eal_get_hugefile_lock_path(path, sizeof(path),
-			list_idx * RTE_MAX_MEMSEG_PER_LIST + seg_idx);
-
-	fd = open(path, O_CREAT | O_RDWR, 0660);
-	if (fd < 0) {
-		RTE_LOG(ERR, EAL, "%s(): error creating lockfile '%s': %s\n",
-			__func__, path, strerror(errno));
-		return -1;
-	}
-	/* take out a read lock */
-	if (lock(fd, LOCK_SH) != 1) {
-		RTE_LOG(ERR, EAL, "%s(): failed to take out a readlock on '%s': %s\n",
-			__func__, path, strerror(errno));
-		close(fd);
-		return -1;
-	}
-	/* store it for future reference */
-	fd_list[list_idx].fds[seg_idx] = fd;
-	fd_list[list_idx].count++;
-	return fd;
-}
-
-static int unlock_segment(int list_idx, int seg_idx)
-{
-	int fd, ret;
-
-	if (list_idx < 0 || list_idx >= (int)RTE_DIM(fd_list))
-		return -1;
-	if (seg_idx < 0 || seg_idx >= fd_list[list_idx].len)
-		return -1;
-
-	fd = fd_list[list_idx].fds[seg_idx];
-
-	/* upgrade lock to exclusive to see if we can remove the lockfile */
-	ret = lock(fd, LOCK_EX);
-	if (ret == 1) {
-		/* we've succeeded in taking exclusive lock, this lockfile may
-		 * be removed.
-		 */
-		char path[PATH_MAX] = {0};
-		eal_get_hugefile_lock_path(path, sizeof(path),
-				list_idx * RTE_MAX_MEMSEG_PER_LIST + seg_idx);
-		if (unlink(path)) {
-			RTE_LOG(ERR, EAL, "%s(): error removing lockfile '%s': %s\n",
-				__func__, path, strerror(errno));
-		}
-	}
-	/* we don't want to leak the fd, so even if we fail to lock, close fd
-	 * and remove it from list anyway.
-	 */
-	close(fd);
-	fd_list[list_idx].fds[seg_idx] = -1;
-	fd_list[list_idx].count--;
-
-	if (ret < 0)
-		return -1;
-	return 0;
-}
-
 static int
 get_seg_memfd(struct hugepage_info *hi __rte_unused,
 		unsigned int list_idx __rte_unused,
@@ -425,7 +346,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
 }
 
 static int
-resize_hugefile_in_memory(int fd, int list_idx, uint64_t fa_offset,
+resize_hugefile_in_memory(int fd, uint64_t fa_offset,
 		uint64_t page_sz, bool grow)
 {
 	int flags = grow ? 0 : FALLOC_FL_PUNCH_HOLE |
@@ -441,18 +362,12 @@ resize_hugefile_in_memory(int fd, int list_idx, uint64_t fa_offset,
 			strerror(errno));
 		return -1;
 	}
-	/* increase/decrease total segment count */
-	fd_list[list_idx].count += (grow ? 1 : -1);
-	if (!grow && fd_list[list_idx].count == 0) {
-		close(fd_list[list_idx].memseg_list_fd);
-		fd_list[list_idx].memseg_list_fd = -1;
-	}
 	return 0;
 }
 
 static int
-resize_hugefile_in_filesystem(int fd, char *path, int list_idx, int seg_idx,
-		uint64_t fa_offset, uint64_t page_sz, bool grow)
+resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
+		bool grow)
 {
 	bool again = false;
 
@@ -481,72 +396,22 @@ resize_hugefile_in_filesystem(int fd, char *path, int list_idx, int seg_idx,
 		} else {
 			int flags = grow ? 0 : FALLOC_FL_PUNCH_HOLE |
 					FALLOC_FL_KEEP_SIZE;
-			int ret, lock_fd;
+			int ret;
 
-			/* if fallocate() is supported, we need to take out a
-			 * read lock on allocate (to prevent other processes
-			 * from deallocating this page), and take out a write
-			 * lock on deallocate (to ensure nobody else is using
-			 * this page).
-			 *
-			 * read locks on page itself are already taken out at
-			 * file creation, in get_seg_fd().
-			 *
-			 * we cannot rely on simple use of flock() call, because
-			 * we need to be able to lock a section of the file,
-			 * and we cannot use fcntl() locks, because of numerous
-			 * problems with their semantics, so we will use
-			 * deterministically named lock files for each section
-			 * of the file.
-			 *
-			 * if we're shrinking the file, we want to upgrade our
-			 * lock from shared to exclusive.
-			 *
-			 * lock_fd is an fd for a lockfile, not for the segment
-			 * list.
+			/*
+			 * technically, it is perfectly safe for both primary
+			 * and secondary to grow and shrink the page files:
+			 * growing the file repeatedly has no effect because
+			 * a page can only be allocated once, while mmap ensures
+			 * that secondaries hold on to the page even after the
+			 * page itself is removed from the filesystem.
+			 *
+			 * however, leaving growing/shrinking to the primary
+			 * tends to expose bugs in fdlist page count handling,
+			 * so leave this here just in case.
 			 */
-			lock_fd = get_segment_lock_fd(list_idx, seg_idx);
-
-			if (!grow) {
-				/* we are using this lockfile to determine
-				 * whether this particular page is locked, as we
-				 * are in single file segments mode and thus
-				 * cannot use regular flock() to get this info.
-				 *
-				 * we want to try and take out an exclusive lock
-				 * on the lock file to determine if we're the
-				 * last ones using this page, and if not, we
-				 * won't be shrinking it, and will instead exit
-				 * prematurely.
-				 */
-				ret = lock(lock_fd, LOCK_EX);
-
-				/* drop the lock on the lockfile, so that even
-				 * if we couldn't shrink the file ourselves, we
-				 * are signalling to other processes that we're
-				 * no longer using this page.
-				 */
-				if (unlock_segment(list_idx, seg_idx))
-					RTE_LOG(ERR, EAL, "Could not unlock segment\n");
-
-				/* additionally, if this was the last lock on
-				 * this segment list, we can safely close the
-				 * page file fd, so that one of the processes
-				 * could then delete the file after shrinking.
-				 */
-				if (ret < 1 && fd_list[list_idx].count == 0) {
-					close(fd);
-					fd_list[list_idx].memseg_list_fd = -1;
-				}
-
-				if (ret < 0) {
-					RTE_LOG(ERR, EAL, "Could not lock segment\n");
-					return -1;
-				}
-				if (ret == 0)
-					/* failed to lock, not an error. */
-					return 0;
-			}
+			if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+				return 0;
 
 			/* grow or shrink the file */
 			ret = fallocate(fd, flags, fa_offset, page_sz);
@@ -564,44 +429,43 @@ resize_hugefile_in_filesystem(int fd, char *path, int list_idx, int seg_idx,
 					strerror(errno));
 				return -1;
 			}
-			} else {
+			} else
 				fallocate_supported = 1;
-
-				/* we've grew/shrunk the file, and we hold an
-				 * exclusive lock now. check if there are no
-				 * more segments active in this segment list,
-				 * and remove the file if there aren't.
-				 */
-				if (fd_list[list_idx].count == 0) {
-					if (unlink(path))
-						RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n",
-							__func__, path,
-							strerror(errno));
-					close(fd);
-					fd_list[list_idx].memseg_list_fd = -1;
-				}
-			}
 		}
 	} while (again);
 
 	return 0;
 }
 
+static void
+close_hugefile(int fd, char *path, int list_idx)
+{
+	/*
+	 * primary process must unlink the file, but only when not in in-memory
+	 * mode (as in that case there is no file to unlink).
+	 */
+	if (!internal_config.in_memory &&
+			rte_eal_process_type() == RTE_PROC_PRIMARY &&
+			unlink(path))
+		RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n",
+			__func__, path, strerror(errno));
+
+	close(fd);
+	fd_list[list_idx].memseg_list_fd = -1;
+}
 
 static int
-resize_hugefile(int fd, char *path, int list_idx, int seg_idx,
-		uint64_t fa_offset, uint64_t page_sz, bool grow)
+resize_hugefile(int fd, uint64_t fa_offset, uint64_t page_sz, bool grow)
 {
-
-	/* in-memory mode is a special case, because we don't need to perform
-	 * any locking, and we can be sure that fallocate() is supported.
+	/* in-memory mode is a special case, because we can be sure that
+	 * fallocate() is supported.
 	 */
 	if (internal_config.in_memory)
-		return resize_hugefile_in_memory(fd, list_idx, fa_offset,
+		return resize_hugefile_in_memory(fd, fa_offset,
 				page_sz, grow);
 
-	return resize_hugefile_in_filesystem(fd, path, list_idx, seg_idx,
-			fa_offset, page_sz, grow);
+	return resize_hugefile_in_filesystem(fd, fa_offset, page_sz,
+			grow);
 }
 
 static int
@@ -663,10 +527,11 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 
 	if (internal_config.single_file_segments) {
 		map_offset = seg_idx * alloc_sz;
-		ret = resize_hugefile(fd, path, list_idx, seg_idx,
-				map_offset, alloc_sz, true);
+		ret = resize_hugefile(fd, map_offset, alloc_sz, true);
 		if (ret < 0)
 			goto resized;
+
+		fd_list[list_idx].count++;
 	} else {
 		map_offset = 0;
 		if (ftruncate(fd, alloc_sz) < 0) {
@@ -769,15 +634,21 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 		 */
 		RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n");
 	}
+	/* roll back the ref count */
+	if (internal_config.single_file_segments)
+		fd_list[list_idx].count--;
 resized:
 	/* some codepaths will return negative fd, so exit early */
 	if (fd < 0)
 		return -1;
 
 	if (internal_config.single_file_segments) {
-		resize_hugefile(fd, path, list_idx, seg_idx, map_offset,
-				alloc_sz, false);
+		resize_hugefile(fd, map_offset, alloc_sz, false);
 		/* ignore failure, can't make it any worse */
+
+		/* if refcount is at zero, close the file */
+		if (fd_list[list_idx].count == 0)
+			close_hugefile(fd, path, list_idx);
 	} else {
 		/* only remove file if we can take out a write lock */
 		if (internal_config.hugepage_unlink == 0 &&
@@ -834,9 +705,12 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
 
 	if (internal_config.single_file_segments) {
 		map_offset = seg_idx * ms->len;
-		if (resize_hugefile(fd, path, list_idx, seg_idx, map_offset,
-				ms->len, false))
+		if (resize_hugefile(fd, map_offset, ms->len, false))
 			return -1;
+
+		if (--(fd_list[list_idx].count) == 0)
+			close_hugefile(fd, path, list_idx);
+
 		ret = 0;
 	} else {
 		/* if we're able to take out a write lock, we're the last one
@@ -1492,18 +1366,24 @@ alloc_list(int list_idx, int len)
 	int *data;
 	int i;
 
-	/* ensure we have space to store fd per each possible segment */
-	data = malloc(sizeof(int) * len);
-	if (data == NULL) {
-		RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n");
-		return -1;
+	/* single-file segments mode does not need fd list */
+	if (!internal_config.single_file_segments) {
+		/* ensure we have space to store fd per each possible segment */
+		data = malloc(sizeof(int) * len);
+		if (data == NULL) {
+			RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n");
+			return -1;
+		}
+		/* set all fd's as invalid */
+		for (i = 0; i < len; i++)
+			data[i] = -1;
+		fd_list[list_idx].fds = data;
+		fd_list[list_idx].len = len;
+	} else {
+		fd_list[list_idx].fds = NULL;
+		fd_list[list_idx].len = 0;
 	}
-	/* set all fd's as invalid */
-	for (i = 0; i < len; i++)
-		data[i] = -1;
-	fd_list[list_idx].fds = data;
-	fd_list[list_idx].len = len;
 
 	fd_list[list_idx].count = 0;
 	fd_list[list_idx].memseg_list_fd = -1;
 
@@ -1551,20 +1431,10 @@ eal_memalloc_set_seg_fd(int list_idx, int seg_idx, int fd)
 int
 eal_memalloc_set_seg_list_fd(int list_idx, int fd)
 {
-	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
-
 	/* non-single file segment mode doesn't support segment list fd's */
 	if (!internal_config.single_file_segments)
 		return -ENOTSUP;
 
-	/* if list is not allocated, allocate it */
-	if (fd_list[list_idx].len == 0) {
-		int len = mcfg->memsegs[list_idx].memseg_arr.len;
-
-		if (alloc_list(list_idx, len) < 0)
-			return -ENOMEM;
-	}
-
 	fd_list[list_idx].memseg_list_fd = fd;
 
 	return 0;
@@ -1642,9 +1512,6 @@ eal_memalloc_get_seg_fd_offset(int list_idx, int seg_idx, size_t *offset)
 		return -ENOTSUP;
 	}
 
-	/* fd_list not initialized? */
-	if (fd_list[list_idx].len == 0)
-		return -ENODEV;
 
 	if (internal_config.single_file_segments) {
 		size_t pgsz = mcfg->memsegs[list_idx].page_sz;
@@ -1653,6 +1520,10 @@ eal_memalloc_get_seg_fd_offset(int list_idx, int seg_idx, size_t *offset)
 			return -ENOENT;
 		*offset = pgsz * seg_idx;
 	} else {
+		/* fd_list not initialized? */
+		if (fd_list[list_idx].len == 0)
+			return -ENODEV;
+
 		/* segment not active? */
 		if (fd_list[list_idx].fds[seg_idx] < 0)
 			return -ENOENT;
-- 
2.17.1
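The safety argument this patch relies on (a hugepage file may be removed
from the filesystem while other processes still use the page, because
mmap() keeps the underlying pages alive until the last mapping is gone)
can be demonstrated with the following standalone sketch. It is
illustrative only and not part of the patch; a regular 4 KiB temporary
file at a hypothetical path stands in for a hugepage file.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/tmp/unlink_demo"; /* hypothetical stand-in path */
	char *p;
	int fd;

	fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;

	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* the fd and the directory entry are both gone after this... */
	close(fd);
	unlink(path);

	/* ...yet the mapping stays fully usable; the kernel frees the
	 * pages only once the last mapping is unmapped. */
	strcpy(p, "still mapped");
	printf("%s\n", p);
	munmap(p, 4096);
	return 0;
}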