From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id AA90AA00BE;
	Tue, 29 Oct 2019 11:59:14 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 09F961BEC0;
	Tue, 29 Oct 2019 11:59:14 +0100 (CET)
Received: from dispatch1-us1.ppe-hosted.com (dispatch1-us1.ppe-hosted.com
 [67.231.154.164]) by dpdk.org (Postfix) with ESMTP id 296D91BEBE
 for <dev@dpdk.org>; Tue, 29 Oct 2019 11:59:13 +0100 (CET)
X-Virus-Scanned: Proofpoint Essentials engine
Received: from webmail.solarflare.com (uk.solarflare.com [193.34.186.16])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1-us1.ppe-hosted.com (PPE Hosted ESMTP Server) with ESMTPS id 7FDF8400059;
 Tue, 29 Oct 2019 10:59:11 +0000 (UTC)
Received: from [192.168.38.17] (91.220.146.112) by ukex01.SolarFlarecom.com
 (10.17.10.4) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Tue, 29 Oct
 2019 10:59:04 +0000
To: Olivier Matz <olivier.matz@6wind.com>, <dev@dpdk.org>
CC: Anatoly Burakov <anatoly.burakov@intel.com>, Ferruh Yigit
 <ferruh.yigit@linux.intel.com>, "Giridharan, Ganesan" <ggiridharan@rbbn.com>, 
 Jerin Jacob Kollanukkaran <jerinj@marvell.com>, Kiran Kumar Kokkilagadda
 <kirankumark@marvell.com>, Stephen Hemminger <sthemmin@microsoft.com>,
 "Thomas Monjalon" <thomas@monjalon.net>, Vamsi Krishna Attunuru
 <vattunuru@marvell.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-6-olivier.matz@6wind.com>
From: Andrew Rybchenko <arybchenko@solarflare.com>
Message-ID: <08a69641-9876-1f28-0f43-06f5d858d4c7@solarflare.com>
Date: Tue, 29 Oct 2019 13:59:00 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <20191028140122.9592-6-olivier.matz@6wind.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-Originating-IP: [91.220.146.112]
X-ClientProxiedBy: ocex03.SolarFlarecom.com (10.20.40.36) To
 ukex01.SolarFlarecom.com (10.17.10.4)
Subject: Re: [dpdk-dev] [PATCH 5/5] mempool: prevent objects from
 being across pages
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On 10/28/19 5:01 PM, Olivier Matz wrote:
> When populating a mempool, ensure that objects do not span several
> pages, except if the user did not request IOVA-contiguous objects.

I think it breaks the distribution of objects across memory channels,
which could affect performance significantly.
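
Just to illustrate with made-up numbers (the page size, object size
and the 256-byte interleave unit below are arbitrary, not taken from
any real configuration): once an object that would cross a page
boundary is pushed to the next page start, the start offsets modulo
the interleave unit repeat the same short pattern on every page
instead of rotating across channels.

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
          const size_t pg_sz = 4096;   /* assumed page size */
          const size_t elt_sz = 1100;  /* assumed padded object size */
          size_t off = 0;
          int i;

          for (i = 0; i < 9; i++) {
                  /* same test as check_obj_bounds(): align up if the
                   * object would cross a page boundary */
                  if (off / pg_sz != (off + elt_sz - 1) / pg_sz)
                          off = (off + pg_sz - 1) / pg_sz * pg_sz;
                  printf("obj %d: off %zu, off %% 256 = %zu\n",
                         i, off, off % 256);
                  off += elt_sz;
          }
          return 0;
  }

With these numbers only three distinct offsets (0, 76, 152) ever
occur, whereas without the realignment the offsets keep rotating.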

> Signed-off-by: Vamsi Krishna Attunuru <vattunuru@marvell.com>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>   lib/librte_mempool/rte_mempool.c             | 23 +++++-----------
>   lib/librte_mempool/rte_mempool_ops_default.c | 29 ++++++++++++++++++--
>   2 files changed, 33 insertions(+), 19 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 7664764e5..b23fd1b06 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -428,8 +428,6 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
>   
>   	if (!need_iova_contig_obj)
>   		*pg_sz = 0;
> -	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> -		*pg_sz = 0;
>   	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
>   		*pg_sz = get_min_page_size(mp->socket_id);
>   	else
> @@ -478,17 +476,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   	 * then just set page shift and page size to 0, because the user has
>   	 * indicated that there's no need to care about anything.
>   	 *
> -	 * if we do need contiguous objects, there is also an option to reserve
> -	 * the entire mempool memory as one contiguous block of memory, in
> -	 * which case the page shift and alignment wouldn't matter as well.
> +	 * if we do need contiguous objects (if a mempool driver has its
> +	 * own calc_size() method returning min_chunk_size = mem_size),
> +	 * there is also an option to reserve the entire mempool memory
> +	 * as one contiguous block of memory.
>   	 *
>   	 * if we require contiguous objects, but not necessarily the entire
> -	 * mempool reserved space to be contiguous, then there are two options.
> -	 *
> -	 * if our IO addresses are virtual, not actual physical (IOVA as VA
> -	 * case), then no page shift needed - our memory allocation will give us
> -	 * contiguous IO memory as far as the hardware is concerned, so
> -	 * act as if we're getting contiguous memory.
> +	 * mempool reserved space to be contiguous, pg_sz will be != 0,
> +	 * and the default ops->populate() will take care of not placing
> +	 * objects across pages.
>   	 *
>   	 * if our IO addresses are physical, we may get memory from bigger
>   	 * pages, or we might get memory from smaller pages, and how much of it
> @@ -501,11 +497,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   	 *
>   	 * If we fail to get enough contiguous memory, then we'll go and
>   	 * reserve space in smaller chunks.
> -	 *
> -	 * We also have to take into account the fact that memory that we're
> -	 * going to allocate from can belong to an externally allocated memory
> -	 * area, in which case the assumption of IOVA as VA mode being
> -	 * synonymous with IOVA contiguousness will not hold.
>   	 */
>   
>   	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> index f6aea7662..dd09a0a32 100644
> --- a/lib/librte_mempool/rte_mempool_ops_default.c
> +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> @@ -61,21 +61,44 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
>   	return mem_size;
>   }
>   
> +/* Returns -1 if object crosses a page boundary, else returns 0 */
> +static int
> +check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
> +{
> +	if (pg_sz == 0)
> +		return 0;
> +	if (elt_sz > pg_sz)
> +		return 0;
> +	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
> +		return -1;
> +	return 0;
> +}
> +
>   int
>   rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
>   		void *vaddr, rte_iova_t iova, size_t len,
>   		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
>   {
> -	size_t total_elt_sz;
> +	char *va = vaddr;
> +	size_t total_elt_sz, pg_sz;
>   	size_t off;
>   	unsigned int i;
>   	void *obj;
>   
> +	rte_mempool_get_page_size(mp, &pg_sz);
> +

The function may return an error, which should be taken into account here.
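
Something along these lines (an untested sketch; 'ret' would need to
be added to the local variables):

          ret = rte_mempool_get_page_size(mp, &pg_sz);
          if (ret < 0)
                  return ret;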

>   	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>   
> -	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> +	for (off = 0, i = 0; i < max_objs; i++) {
> +		/* align offset to next page start if required */
> +		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> +			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> +
> +		if (off + total_elt_sz > len)
> +			break;
> +
>   		off += mp->header_size;
> -		obj = (char *)vaddr + off;
> +		obj = va + off;
>   		obj_cb(mp, obj_cb_arg, obj,
>   		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
>   		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);