From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Burakov, Anatoly"
To: Andrew Rybchenko , dev@dpdk.org
Cc: Olivier MATZ
Date: Tue, 17 Apr 2018 11:23:42 +0100
Subject: Re: [dpdk-dev] [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated
In-Reply-To: <1523885080-17168-5-git-send-email-arybchenko@solarflare.com>
References: <1511539591-20966-1-git-send-email-arybchenko@solarflare.com>
 <1523885080-17168-1-git-send-email-arybchenko@solarflare.com>
 <1523885080-17168-5-git-send-email-arybchenko@solarflare.com>
List-Id: DPDK patches and discussions

On 16-Apr-18 2:24 PM, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory.
> Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
>
> Bump ABI version since the patch breaks it.
>
> Suggested-by: Olivier Matz
> Signed-off-by: Andrew Rybchenko
> ---
> v3 -> v4:
>  - rebased on top of memory rework
>  - dropped previous Ack's since rebase is not trivial
>  - check size calculation failure in rte_mempool_populate_anon() and
>    rte_mempool_memchunk_anon_free()
>
> v2 -> v3:
>  - none
>
> v1 -> v2:
>  - clarify min_chunk_size meaning
>  - rebase on top of patch series which fixes library version in meson
>    build
>
> RFCv2 -> v1:
>  - move default calc_mem_size callback to rte_mempool_ops_default.c
>  - add ABI changes to release notes
>  - name default callback consistently: rte_mempool_op__default()
>  - bump ABI version since it is the first patch which breaks ABI
>  - describe default callback behaviour in details
>  - avoid introduction of internal function to cope with deprecation
>    (keep it to deprecation patch)
>  - move cache-line or page boundary chunk alignment to default callback
>  - highlight that min_chunk_size and align parameters are output only
>
>  doc/guides/rel_notes/deprecation.rst         |   3 +-
>  doc/guides/rel_notes/release_18_05.rst       |   8 +-
>  lib/librte_mempool/Makefile                  |   3 +-
>  lib/librte_mempool/meson.build               |   5 +-
>  lib/librte_mempool/rte_mempool.c             | 114 +++++++++++++++------------
>  lib/librte_mempool/rte_mempool.h             |  86 +++++++++++++++++++-
>  lib/librte_mempool/rte_mempool_ops.c         |  18 +++++
>  lib/librte_mempool/rte_mempool_ops_default.c |  38 +++++++++
>  lib/librte_mempool/rte_mempool_version.map   |   7 ++
>  9 files changed, 225 insertions(+), 57 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

<...>

> -	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> +		size_t min_chunk_size;
>  		unsigned int flags;
> +
>  		if (try_contig || no_pageshift)
> -			size = rte_mempool_xmem_size(n, total_elt_sz, 0,
> -						     mp->flags);
> +			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> +					0, &min_chunk_size, &align);
>  		else
> -			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
> -						     mp->flags);
> +			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> +					pg_shift, &min_chunk_size, &align);
> +
> +		if (mem_size < 0) {
> +			ret = mem_size;
> +			goto fail;
> +		}
>
>  		ret = snprintf(mz_name, sizeof(mz_name),
>  			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> @@ -692,27 +678,31 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  		if (try_contig)
>  			flags |= RTE_MEMZONE_IOVA_CONTIG;
>
> -		mz = rte_memzone_reserve_aligned(mz_name, size, mp->socket_id,
> -				flags, align);
> +		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> +				mp->socket_id, flags, align);
>
> -		/* if we were trying to allocate contiguous memory, adjust
> -		 * memzone size and page size to fit smaller page sizes, and
> -		 * try again.
> +		/* if we were trying to allocate contiguous memory, failed and
> +		 * minimum required contiguous chunk fits minimum page, adjust
> +		 * memzone size to the page size, and try again.
>  		 */
> -		if (mz == NULL && try_contig) {
> +		if (mz == NULL && try_contig && min_chunk_size <= pg_sz) {

This is a bit pessimistic. There may not have been enough IOVA-contiguous
memory to reserve `mem_size`, but there may be enough contiguous memory to
try and reserve `min_chunk_size` contiguous memory, if it's bigger than the
minimum page size. This is the *minimum* page size, so there may be bigger
pages, and ideally, if (min_chunk_size >= pg_sz) && (min_chunk_size <
mem_size), we might have tried to allocate some IOVA-contiguous memory and
succeeded.

However, that's not a huge issue, so...
Acked-by: Anatoly Burakov

>  			try_contig = false;
>  			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> -			align = pg_sz;
> -			size = rte_mempool_xmem_size(n, total_elt_sz,
> -				pg_shift, mp->flags);
>
> -			mz = rte_memzone_reserve_aligned(mz_name, size,
> +			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> +					pg_shift, &min_chunk_size, &align);
> +			if (mem_size < 0) {
> +				ret = mem_size;
> +				goto fail;
> +			}
> +
> +			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>  				mp->socket_id, flags, align);
>  		}

<...>

-- 
Thanks,
Anatoly
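[Editor's note: for readers following the thread, the semantics of the new
calc_mem_size op (total size computed from the per-object footprint, with
min_chunk_size and align reported back to the caller) can be sketched as a
small standalone function. This is NOT the DPDK implementation — the name
`calc_mem_size_sketch`, the cache-line constant, and the page-rounding
policy are illustrative stand-ins for the behaviour the commit message
describes.]

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical stand-in for the new op: given the number of objects,
 * the per-object footprint (header + element + trailer) and a page
 * shift, return the total memory required; report the minimum
 * contiguous chunk size and required alignment via output parameters
 * (output-only, as the changelog highlights).
 */
static ssize_t
calc_mem_size_sketch(uint32_t obj_num, size_t total_elt_sz,
		     uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	size_t mem_size = (size_t)obj_num * total_elt_sz;

	/* at minimum, one whole object must fit in a contiguous chunk */
	*min_chunk_size = total_elt_sz;

	if (pg_shift == 0) {
		/* no paging: the whole area must be one contiguous chunk,
		 * aligned on a cache line (64 bytes here, illustrative)
		 */
		*align = 64;
		return (ssize_t)mem_size;
	}

	/* round the total up to a page multiple and align the
	 * reservation on a page boundary
	 */
	size_t pg_sz = (size_t)1 << pg_shift;
	mem_size = (mem_size + pg_sz - 1) & ~(pg_sz - 1);
	*align = pg_sz;
	return (ssize_t)mem_size;
}
```

For example, 1000 objects of 128 bytes with a 4 KiB page shift (12) round up
to 131072 bytes with page alignment, while pg_shift == 0 yields the raw
128000 bytes with cache-line alignment.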
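[Editor's note: Anatoly's observation above — that when the full-size
IOVA-contiguous reservation fails, retrying with just min_chunk_size could
still succeed on larger pages before giving up on contiguous memory — can be
sketched as a decision helper. The function `next_contig_attempt` is
hypothetical, not mempool code; it only encodes the retry policy under
discussion.]

```c
#include <stddef.h>

/* Decide the next step after an IOVA-contiguous reservation of
 * mem_size bytes failed.  Returns the size of a smaller contiguous
 * reservation worth retrying, or 0 to fall back to page-granular,
 * non-contiguous chunks.  The patch as posted falls back whenever
 * min_chunk_size <= pg_sz; the review notes that when
 * pg_sz < min_chunk_size < mem_size, a contiguous reservation of
 * min_chunk_size might still succeed on bigger pages.
 */
static size_t
next_contig_attempt(size_t mem_size, size_t min_chunk_size, size_t pg_sz)
{
	if (min_chunk_size <= pg_sz)
		return 0;		/* page-sized chunks are enough */
	if (min_chunk_size < mem_size)
		return min_chunk_size;	/* retry a smaller contiguous chunk */
	return 0;			/* nothing smaller left to try */
}
```

E.g. a 1 MiB reservation failing with an 8 KiB min_chunk_size on 4 KiB
minimum pages would, under this policy, retry an 8 KiB contiguous
reservation instead of immediately dropping RTE_MEMZONE_IOVA_CONTIG.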