Subject: Re: [dpdk-dev] [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated
From: "Burakov, Anatoly"
To: Andrew Rybchenko, dev@dpdk.org
Cc: Olivier MATZ
Date: Thu, 12 Apr 2018 16:22:17 +0100
Message-ID: <5aec959f-7073-5c7f-e78f-3a8139de80c5@intel.com>
In-Reply-To: <1522080591-24705-5-git-send-email-arybchenko@solarflare.com>

On 26-Mar-18 5:09 PM, Andrew Rybchenko wrote:
> The size of the memory chunk required to populate mempool objects
> depends on how the objects are stored in memory. Different mempool
> drivers may have different requirements, and a new operation makes it
> possible to calculate the memory size in accordance with driver
> requirements and to advertise the requirements on minimum memory
> chunk size and alignment in a generic way.
>
> Bump ABI version since the patch breaks it.
>
> Suggested-by: Olivier Matz
> Signed-off-by: Andrew Rybchenko
> ---

Hi Andrew,

<...>

> -	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> -		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
> -						mp->flags);
> +		size_t min_chunk_size;
> +
> +		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
> +				&min_chunk_size, &align);
> +		if (mem_size < 0) {
> +			ret = mem_size;
> +			goto fail;
> +		}
>
>  		ret = snprintf(mz_name, sizeof(mz_name),
>  			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> @@ -606,7 +600,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  			goto fail;
>  		}
>
> -		mz = rte_memzone_reserve_aligned(mz_name, size,
> +		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>  			mp->socket_id, mz_flags, align);
>  		/* not enough memory, retry with the biggest zone we have */
>  		if (mz == NULL)
> @@ -617,6 +611,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  			goto fail;
>  		}
>
> +		if (mz->len < min_chunk_size) {
> +			rte_memzone_free(mz);
> +			ret = -ENOMEM;
> +			goto fail;
> +		}
> +
>  		if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
>  			iova = RTE_BAD_IOVA;

OK by me, but needs to be rebased.
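(Side note for anyone skimming the archive: the consolidated flow of the
hunk above is roughly the sketch below. This is illustrative only, not
part of the patch: the helper name reserve_one_chunk is made up, error
paths are trimmed, and in the real patch this logic stays inline in
rte_mempool_populate_default().)

/* Illustrative sketch: how the new op is consumed when reserving one
 * memzone. Assumes <rte_mempool.h> and <rte_memzone.h>. */
static ssize_t
reserve_one_chunk(struct rte_mempool *mp, uint32_t n, uint32_t pg_shift,
		  const char *mz_name, unsigned int mz_flags,
		  const struct rte_memzone **mzp)
{
	size_t min_chunk_size, align;
	ssize_t mem_size;
	const struct rte_memzone *mz;

	/* Ask the driver how much memory n objects need, plus the
	 * minimum usable chunk size and required alignment. */
	mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
						 &min_chunk_size, &align);
	if (mem_size < 0)
		return mem_size;

	mz = rte_memzone_reserve_aligned(mz_name, mem_size,
					 mp->socket_id, mz_flags, align);
	if (mz == NULL)
		return -ENOMEM;

	/* A retry may hand back a smaller zone than requested; it must
	 * still hold at least one driver-defined minimal chunk. */
	if (mz->len < min_chunk_size) {
		rte_memzone_free(mz);
		return -ENOMEM;
	}

	*mzp = mz;
	return mem_size;
}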
>  		else
> @@ -649,13 +649,14 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  static size_t
>  get_anon_size(const struct rte_mempool *mp)
>  {
> -	size_t size, total_elt_sz, pg_sz, pg_shift;
> +	size_t size, pg_sz, pg_shift;
> +	size_t min_chunk_size;
> +	size_t align;
>
>  	pg_sz = getpagesize();

<...>

>
> +/**
> + * Calculate the memory size required to store a given number of objects.
> + *
> + * If mempool objects are not required to be IOVA-contiguous
> + * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
> + * the virtually contiguous chunk size. Otherwise, if mempool objects
> + * must be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
> + * min_chunk_size defines the IOVA-contiguous chunk size.
> + *
> + * @param[in] mp
> + *   Pointer to the memory pool.
> + * @param[in] obj_num
> + *   Number of objects.
> + * @param[in] pg_shift
> + *   LOG2 of the physical page size. If set to 0, ignore page boundaries.
> + * @param[out] min_chunk_size
> + *   Location for the minimum size of the memory chunk which may be used
> + *   to store memory pool objects.
> + * @param[out] align
> + *   Location for the required memory chunk alignment.
> + * @return
> + *   Required memory size aligned at page boundary.
> + */
> +typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
> +		uint32_t obj_num, uint32_t pg_shift,
> +		size_t *min_chunk_size, size_t *align);
> +
> +/**
> + * Default way to calculate the memory size required to store a given
> + * number of objects.
> + *
> + * If page boundaries may be ignored, it is just the product of the total
> + * object size (including header and trailer) and the number of objects.
> + * Otherwise, it is the number of pages required to store the given number
> + * of objects without crossing page boundaries.
> + *
> + * Note that if the object size is bigger than the page size, it assumes
> + * that pages are grouped in subsets of physically contiguous pages big
> + * enough to store at least one object.
> + *
> + * If the mempool driver requires object addresses to be block size
> + * aligned (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra
> + * element is reserved to be able to meet the requirement.
> + *
> + * The minimum size of a memory chunk is either all required space, if
> + * the capabilities say that the whole memory area must be physically
> + * contiguous (MEMPOOL_F_CAPA_PHYS_CONTIG), or the maximum of the page
> + * size and the total element size.
> + *
> + * The required memory chunk alignment is the maximum of the page size
> + * and the cache line size.
> + */
> +ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> +		uint32_t obj_num, uint32_t pg_shift,
> +		size_t *min_chunk_size, size_t *align);

For API docs and wording,

Acked-by: Anatoly Burakov

Should be pretty straightforward to rebase, so you probably should keep
my ack for v4.

-- 
Thanks, Anatoly
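P.S. In case it helps driver maintainers reading this thread: a
driver-supplied op for the simple case could look roughly like the
sketch below. This is a hand-written illustration of the contract the
docs above describe, not code from the patch. The driver name is
hypothetical, and it is deliberately simplified: no MEMPOOL_F_CAPA_*
handling, and objects larger than a page are rejected instead of being
grouped across physically contiguous pages as the real default op does.

/* Hypothetical driver op implementing rte_mempool_calc_mem_size_t.
 * Assumes <rte_mempool.h> and <rte_common.h> (for RTE_MAX and
 * RTE_CACHE_LINE_SIZE). */
static ssize_t
dummy_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		    uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	size_t total_elt_sz = mp->header_size + mp->elt_size +
			      mp->trailer_size;
	size_t pg_sz = (size_t)1 << pg_shift;
	ssize_t mem_size;

	if (pg_shift == 0) {
		/* page boundaries ignored: plain product */
		mem_size = total_elt_sz * obj_num;
	} else if (total_elt_sz <= pg_sz) {
		/* objects must not cross page boundaries */
		size_t obj_per_page = pg_sz / total_elt_sz;
		size_t pg_num = (obj_num + obj_per_page - 1) / obj_per_page;

		mem_size = pg_num << pg_shift;
	} else {
		/* the real default op groups contiguous pages here;
		 * this simplified sketch just refuses */
		return -EINVAL;
	}

	*min_chunk_size = RTE_MAX(pg_sz, total_elt_sz);
	*align = RTE_MAX(pg_sz, (size_t)RTE_CACHE_LINE_SIZE);
	return mem_size;
}

The op would then be plugged into the driver's struct rte_mempool_ops,
i.e. .calc_mem_size = dummy_calc_mem_size, alongside the existing
alloc/enqueue/dequeue callbacks.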