From: Anatoly Burakov <anatoly.burakov@intel.com>
To: dev@dpdk.org
Cc: Olivier Matz, keith.wiles@intel.com, jianfeng.tan@intel.com,
	andras.kovacs@ericsson.com, laszlo.vadkeri@ericsson.com,
	benjamin.walker@intel.com, bruce.richardson@intel.com,
	thomas@monjalon.net, konstantin.ananyev@intel.com,
	kuralamudhan.ramakrishnan@intel.com, louise.m.daly@intel.com,
	nelio.laranjeiro@6wind.com, yskoh@mellanox.com, pepperjo@japf.ch,
	jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
	shreyansh.jain@nxp.com, gowrishankar.m@linux.vnet.ibm.com
Date: Wed, 4 Apr 2018 00:21:36 +0100
Message-Id: <0d00b6eba2742d1bb4ac975e4b0191aa2774d659.1522797505.git.anatoly.burakov@intel.com>
X-Mailer: git-send-email 1.7.0.7
Subject: [dpdk-dev] [PATCH v3 24/68] mempool: add support for the new allocation methods

If a user has specified that the zone should have contiguous memory, use
the new _contig allocation APIs instead of the normal ones. Otherwise,
account for the fact that, unless we're in IOVA_AS_VA mode, we cannot
guarantee that the pages will be physically contiguous, so calculate the
memzone size and alignment as if we were getting the smallest page size
available.

The existing mempool size calculation function also doesn't give us the
expected results, because it returns memzone sizes aligned to page size
(e.g. a 1MB mempool would reserve an entire 1GB page if all the user has
are 1GB pages), so add a new one that gives results more in line with
what we would expect.
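To make the size problem above concrete, the following is a minimal,
self-contained sketch of a page-boundary-aware size calculation. It is
for illustration only: padded_pool_size() is a hypothetical stand-in, not
the actual rte_mempool_xmem_size() implementation, and the flag handling
of the real function is omitted.

/*
 * Simplified model: if no object may cross a (1 << pg_shift)-byte page
 * boundary, the required reservation rounds up to whole pages. With
 * pg_shift == 0 ("don't care about page boundaries") the size is just
 * n * elt_sz.
 */
#include <stdint.h>
#include <stdio.h>

static size_t
padded_pool_size(uint32_t n, size_t elt_sz, uint32_t pg_shift)
{
	size_t pg_sz, objs_per_page, pages;

	if (pg_shift == 0)
		return (size_t)n * elt_sz;

	pg_sz = (size_t)1 << pg_shift;
	objs_per_page = pg_sz / elt_sz;
	if (objs_per_page == 0) /* element larger than a page */
		return (size_t)n * elt_sz;

	pages = ((size_t)n + objs_per_page - 1) / objs_per_page;
	return pages * pg_sz;
}

int main(void)
{
	/* 4096 objects of 256 bytes each: 1 MB of payload */
	printf("pg_shift 0 (no constraint): %zu bytes\n",
	       padded_pool_size(4096, 256, 0));
	printf("pg_shift 12 (4 KB pages):   %zu bytes\n",
	       padded_pool_size(4096, 256, 12));
	printf("pg_shift 30 (1 GB pages):   %zu bytes\n",
	       padded_pool_size(4096, 256, 30));
	return 0;
}

With a 1 GB page shift this reproduces the "1MB mempool eats a whole 1GB
page" case from the message above, while sizing with the smallest page
available keeps the waste bounded by one small page.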
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v3:
    - Fixed mempool size calculation
    - Fixed handling of contiguous memzones
    - Moved earlier in the patchset

 lib/librte_mempool/Makefile      |   3 +
 lib/librte_mempool/meson.build   |   3 +
 lib/librte_mempool/rte_mempool.c | 137 ++++++++++++++++++++++++++++++++-------
 3 files changed, 121 insertions(+), 22 deletions(-)

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 24e735a..cfc69b4 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -13,6 +13,9 @@ EXPORT_MAP := rte_mempool_version.map
 
 LIBABIVER := 3
 
+# uses new contiguous memzone allocation that isn't yet in stable ABI
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_ops.c
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 712720f..5916a0f 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -5,3 +5,6 @@ version = 3
 sources = files('rte_mempool.c', 'rte_mempool_ops.c')
 headers = files('rte_mempool.h')
 deps += ['ring']
+
+# contig memzone allocation is not yet part of stable API
+allow_experimental_apis = true
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 54f7f4b..e147180 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -3,6 +3,7 @@
  * Copyright(c) 2016 6WIND S.A.
  */
 
+#include <stdbool.h>
 #include <stdio.h>
 #include <string.h>
 #include <stdint.h>
@@ -98,6 +99,27 @@ static unsigned optimize_object_size(unsigned obj_size)
 	return new_obj_size * RTE_MEMPOOL_ALIGN;
 }
 
+static size_t
+get_min_page_size(void)
+{
+	const struct rte_mem_config *mcfg =
+			rte_eal_get_configuration()->mem_config;
+	int i;
+	size_t min_pagesz = SIZE_MAX;
+
+	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
+		const struct rte_memseg *ms = &mcfg->memseg[i];
+
+		if (ms->addr == NULL)
+			continue;
+
+		if (ms->hugepage_sz < min_pagesz)
+			min_pagesz = ms->hugepage_sz;
+	}
+
+	return min_pagesz == SIZE_MAX ? (size_t) getpagesize() : min_pagesz;
+}
+
 static void
 mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
 {
@@ -204,7 +226,6 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	return sz->total_size;
 }
 
-
 /*
  * Calculate maximum amount of memory required to store given number of objects.
  */
@@ -367,16 +388,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	/* update mempool capabilities */
 	mp->flags |= mp_capa_flags;
 
-	/* Detect pool area has sufficient space for elements */
-	if (mp_capa_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
-			RTE_LOG(ERR, MEMPOOL,
-				"pool area %" PRIx64 " not enough\n",
-				(uint64_t)len);
-			return -ENOSPC;
-		}
-	}
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -549,6 +560,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned mz_id, n;
 	unsigned int mp_flags;
 	int ret;
+	bool force_contig, no_contig, try_contig, no_pageshift;
 
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
@@ -563,9 +575,62 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	/* update mempool capabilities */
 	mp->flags |= mp_flags;
 
-	if (rte_eal_has_hugepages()) {
-		pg_shift = 0; /* not needed, zone is physically contiguous */
+	no_contig = mp->flags & MEMPOOL_F_NO_PHYS_CONTIG;
+	force_contig = mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG;
+
+	/*
+	 * the following section calculates page shift and page size values.
+	 *
+	 * these values impact the result of rte_mempool_xmem_size(), which
+	 * returns the amount of memory that should be allocated to store the
+	 * desired number of objects. when not zero, it allocates more memory
+	 * for the padding between objects, to ensure that an object does not
+	 * cross a page boundary. in other words, page size/shift are to be set
+	 * to zero if mempool elements won't care about page boundaries.
+	 * there are several considerations for page size and page shift here.
+	 *
+	 * if we don't need our mempools to have physically contiguous objects,
+	 * then just set page shift and page size to 0, because the user has
+	 * indicated that there's no need to care about anything.
+	 *
+	 * if we do need contiguous objects, there is also an option to reserve
+	 * the entire mempool memory as one contiguous block of memory, in
+	 * which case the page shift and alignment wouldn't matter as well.
+	 *
+	 * if we require contiguous objects, but not necessarily the entire
+	 * mempool reserved space to be contiguous, then there are two options.
+	 *
+	 * if our IO addresses are virtual, not actual physical (IOVA as VA
+	 * case), then no page shift needed - our memory allocation will give us
+	 * contiguous physical memory as far as the hardware is concerned, so
+	 * act as if we're getting contiguous memory.
+	 *
+	 * if our IO addresses are physical, we may get memory from bigger
+	 * pages, or we might get memory from smaller pages, and how much of it
+	 * we require depends on whether we want bigger or smaller pages.
+	 * However, requesting each and every memory size is too much work, so
+	 * what we'll do instead is walk through the page sizes available, pick
+	 * the smallest one and set up page shift to match that one. We will be
+	 * wasting some space this way, but it's much nicer than looping around
+	 * trying to reserve each and every page size.
+	 *
+	 * However, since size calculation will produce page-aligned sizes, it
+	 * makes sense to first try and see if we can reserve the entire memzone
+	 * in one contiguous chunk as well (otherwise we might end up wasting a
+	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
+	 * memory, then we'll go and reserve space page-by-page.
+	 */
+	no_pageshift = no_contig || force_contig ||
+			rte_eal_iova_mode() == RTE_IOVA_VA;
+	try_contig = !no_contig && !no_pageshift && rte_eal_has_hugepages();
+
+	if (no_pageshift) {
 		pg_sz = 0;
+		pg_shift = 0;
+		align = RTE_CACHE_LINE_SIZE;
+	} else if (try_contig) {
+		pg_sz = get_min_page_size();
+		pg_shift = rte_bsf32(pg_sz);
 		align = RTE_CACHE_LINE_SIZE;
 	} else {
 		pg_sz = getpagesize();
@@ -575,8 +640,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-						mp->flags);
+		if (try_contig || no_pageshift)
+			size = rte_mempool_xmem_size(n, total_elt_sz, 0,
+				mp->flags);
+		else
+			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
+				mp->flags);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -585,23 +654,47 @@
 			goto fail;
 		}
 
-		mz = rte_memzone_reserve_aligned(mz_name, size,
-			mp->socket_id, mz_flags, align);
-		/* not enough memory, retry with the biggest zone we have */
-		if (mz == NULL)
-			mz = rte_memzone_reserve_aligned(mz_name, 0,
+		mz = NULL;
+		if (force_contig || try_contig) {
+			/* if contiguous memory for entire mempool memory was
+			 * requested, don't try reserving again if we fail...
+			 */
+			mz = rte_memzone_reserve_aligned_contig(mz_name, size,
+				mp->socket_id, mz_flags, align);
+
+			/* ...unless we are doing best effort allocation, in
+			 * which case recalculate size and try again */
+			if (try_contig && mz == NULL) {
+				try_contig = false;
+				align = pg_sz;
+				size = rte_mempool_xmem_size(n, total_elt_sz,
+					pg_shift, mp->flags);
+			}
+		}
+		/* only try this if we're not trying to reserve contiguous
+		 * memory.
+		 */
+		if (!force_contig && mz == NULL) {
+			mz = rte_memzone_reserve_aligned(mz_name, size,
 				mp->socket_id, mz_flags, align);
+			/* not enough memory, retry with the biggest zone we
+			 * have
+			 */
+			if (mz == NULL)
+				mz = rte_memzone_reserve_aligned(mz_name, 0,
+					mp->socket_id, mz_flags, align);
+		}
 		if (mz == NULL) {
 			ret = -rte_errno;
 			goto fail;
 		}
 
-		if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+		if (no_contig)
 			iova = RTE_BAD_IOVA;
 		else
 			iova = mz->iova;
 
-		if (rte_eal_has_hugepages())
+		if (no_pageshift || try_contig)
 			ret = rte_mempool_populate_iova(mp, mz->addr,
 				iova, mz->len,
 				rte_mempool_memchunk_mz_free,
-- 
2.7.4
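The long comment added to rte_mempool_populate_default() above boils down
to a small decision over the mempool flags and the IOVA mode. The sketch
below is a self-contained, hypothetical model of that selection logic;
pick_sizing(), log2_sz() and CACHE_LINE_SIZE are stand-ins, not DPDK API,
and it mirrors the no_pageshift / try_contig / fallback branches only as
described in the patch.

/*
 * Simplified model of the page size/shift/alignment selection:
 *   no_contig    ~ MEMPOOL_F_NO_PHYS_CONTIG is set
 *   force_contig ~ MEMPOOL_F_CAPA_PHYS_CONTIG is set
 *   iova_va      ~ rte_eal_iova_mode() == RTE_IOVA_VA
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define CACHE_LINE_SIZE 64

struct sizing {
	size_t pg_sz;
	uint32_t pg_shift;
	size_t align;
	bool try_contig; /* best-effort: try one contiguous memzone first */
};

/* log2 of a power-of-two size (stand-in for rte_bsf32()) */
static uint32_t
log2_sz(size_t sz)
{
	uint32_t shift = 0;

	while (((size_t)1 << shift) < sz)
		shift++;
	return shift;
}

static struct sizing
pick_sizing(bool no_contig, bool force_contig, bool iova_va,
	    bool have_hugepages, size_t min_hugepage_sz)
{
	struct sizing s = { 0 };
	bool no_pageshift = no_contig || force_contig || iova_va;

	s.try_contig = !no_contig && !no_pageshift && have_hugepages;

	if (no_pageshift) {
		/* objects need not respect page boundaries at all */
		s.pg_sz = 0;
		s.pg_shift = 0;
		s.align = CACHE_LINE_SIZE;
	} else if (s.try_contig) {
		/* size the reservation as if only the smallest hugepage
		 * size were available, in case the contiguous attempt fails
		 */
		s.pg_sz = min_hugepage_sz;
		s.pg_shift = log2_sz(s.pg_sz);
		s.align = CACHE_LINE_SIZE;
	} else {
		/* no hugepages: pad objects to the regular page size */
		s.pg_sz = (size_t)getpagesize();
		s.pg_shift = log2_sz(s.pg_sz);
		s.align = s.pg_sz;
	}
	return s;
}

int main(void)
{
	/* physical IOVA, hugepages available, smallest hugepage is 2 MB */
	struct sizing s = pick_sizing(false, false, false, true, 2u << 20);

	printf("pg_sz=%zu pg_shift=%u align=%zu try_contig=%d\n",
	       s.pg_sz, s.pg_shift, s.align, (int)s.try_contig);
	return 0;
}

The reservation itself then follows the same order as the patch: when
force_contig or try_contig is set, a single _contig memzone is attempted
for the whole pool; if that best-effort attempt fails, try_contig is
dropped, the size is recalculated with the page shift above, and the code
falls back to the plain rte_memzone_reserve_aligned() path, retrying with
the biggest available zone, as it did before the patch.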