From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id A86EB43A64;
	Sun,  4 Feb 2024 02:38:23 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 25C8E402F1;
	Sun,  4 Feb 2024 02:38:23 +0100 (CET)
Received: from szxga06-in.huawei.com (szxga06-in.huawei.com [45.249.212.32])
 by mails.dpdk.org (Postfix) with ESMTP id B3C20402A2
 for <dev@dpdk.org>; Sun,  4 Feb 2024 02:38:21 +0100 (CET)
Received: from mail.maildlp.com (unknown [172.19.163.44])
 by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4TSBwJ4KPSz1vt2b;
 Sun,  4 Feb 2024 09:37:52 +0800 (CST)
Received: from dggpeml500024.china.huawei.com (unknown [7.185.36.10])
 by mail.maildlp.com (Postfix) with ESMTPS id 466D3140384;
 Sun,  4 Feb 2024 09:38:19 +0800 (CST)
Received: from [10.67.121.161] (10.67.121.161) by
 dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.35; Sun, 4 Feb 2024 09:38:19 +0800
Subject: Re: [PATCH] dmadev: standardize alignment and allocation
To: <pbhagavatula@marvell.com>, <jerinj@marvell.com>, Kevin Laatz
 <kevin.laatz@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
CC: <dev@dpdk.org>
References: <20240202090633.10816-1-pbhagavatula@marvell.com>
From: fengchengwen <fengchengwen@huawei.com>
Message-ID: <97c4bca5-39f6-c66b-bb51-675c3695b3d1@huawei.com>
Date: Sun, 4 Feb 2024 09:38:18 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20240202090633.10816-1-pbhagavatula@marvell.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.67.121.161]
X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To
 dggpeml500024.china.huawei.com (7.185.36.10)
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Hi Pavan,

Allocating fp_objects from rte_memory is a good idea, but it may cause
an rte_memory leak, especially in multi-process scenarios.

Currently, there is no mechanism for releasing such rte_memory that
doesn't belong to any driver.

So I suggest: maybe we could add an rte_mem_align API which allocates
from libc, and use it in these cases.
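
To make the suggestion concrete, a rough sketch of what such a helper
could look like follows (the name and placement are only an assumption,
this API does not exist yet):

#include <stdlib.h>
#include <string.h>

/* Allocate zeroed, aligned memory from the libc heap instead of from
 * rte_memory, so nothing is left behind in the hugepage heap when a
 * process exits without cleanup.
 */
static void *
mem_align_zalloc(size_t align, size_t size)
{
	void *ptr = NULL;

	/* posix_memalign() requires align to be a power of two and a
	 * multiple of sizeof(void *); RTE_CACHE_LINE_SIZE satisfies both.
	 */
	if (posix_memalign(&ptr, align, size) != 0)
		return NULL;

	memset(ptr, 0, size);
	return ptr;
}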

BTW: rte_dma_devices is only used in the control path, so it doesn't
need the rte_memory API, but I think it could use the new rte_mem_align
API.
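
With such a helper, dma_dev_data_prepare() could then read roughly as
below (again only a sketch on top of the hypothetical helper above, not
the final implementation):

	size = dma_devices_max * sizeof(struct rte_dma_dev);
	rte_dma_devices = mem_align_zalloc(RTE_CACHE_LINE_SIZE, size);
	if (rte_dma_devices == NULL)
		return -ENOMEM;

	return 0;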

Thanks

On 2024/2/2 17:06, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Align fp_objects based on cacheline size, allocate
> devices and fp_objects memory on hugepages.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  lib/dmadev/rte_dmadev.c      | 6 ++----
>  lib/dmadev/rte_dmadev_core.h | 2 +-
>  2 files changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> index 67434c805f43..1fe1434019f0 100644
> --- a/lib/dmadev/rte_dmadev.c
> +++ b/lib/dmadev/rte_dmadev.c
> @@ -143,10 +143,9 @@ dma_fp_data_prepare(void)
>  	 */
>  	size = dma_devices_max * sizeof(struct rte_dma_fp_object) +
>  		RTE_CACHE_LINE_SIZE;
> -	ptr = malloc(size);
> +	ptr = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
>  	if (ptr == NULL)
>  		return -ENOMEM;
> -	memset(ptr, 0, size);
>  
>  	rte_dma_fp_objs = RTE_PTR_ALIGN(ptr, RTE_CACHE_LINE_SIZE);
>  	for (i = 0; i < dma_devices_max; i++)
> @@ -164,10 +163,9 @@ dma_dev_data_prepare(void)
>  		return 0;
>  
>  	size = dma_devices_max * sizeof(struct rte_dma_dev);
> -	rte_dma_devices = malloc(size);
> +	rte_dma_devices = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
>  	if (rte_dma_devices == NULL)
>  		return -ENOMEM;
> -	memset(rte_dma_devices, 0, size);
>  
>  	return 0;
>  }
> diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
> index 064785686f7f..e8239c2d22b6 100644
> --- a/lib/dmadev/rte_dmadev_core.h
> +++ b/lib/dmadev/rte_dmadev_core.h
> @@ -73,7 +73,7 @@ struct rte_dma_fp_object {
>  	rte_dma_completed_t        completed;
>  	rte_dma_completed_status_t completed_status;
>  	rte_dma_burst_capacity_t   burst_capacity;
> -} __rte_aligned(128);
> +} __rte_cache_aligned;
>  
>  extern struct rte_dma_fp_object *rte_dma_fp_objs;
>  
>