DPDK patches and discussions
* [PATCH] dmadev: standardize alignment and allocation
@ 2024-02-02  9:06 pbhagavatula
  2024-02-04  1:38 ` fengchengwen
  2024-02-10  6:27 ` [PATCH v2] dmadev: standardize alignment pbhagavatula
  0 siblings, 2 replies; 6+ messages in thread
From: pbhagavatula @ 2024-02-02  9:06 UTC (permalink / raw)
  To: jerinj, Chengwen Feng, Kevin Laatz, Bruce Richardson; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Align fp_objects based on cacheline size, allocate
devices and fp_objects memory on hugepages.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 lib/dmadev/rte_dmadev.c      | 6 ++----
 lib/dmadev/rte_dmadev_core.h | 2 +-
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 67434c805f43..1fe1434019f0 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -143,10 +143,9 @@ dma_fp_data_prepare(void)
 	 */
 	size = dma_devices_max * sizeof(struct rte_dma_fp_object) +
 		RTE_CACHE_LINE_SIZE;
-	ptr = malloc(size);
+	ptr = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
 	if (ptr == NULL)
 		return -ENOMEM;
-	memset(ptr, 0, size);
 
 	rte_dma_fp_objs = RTE_PTR_ALIGN(ptr, RTE_CACHE_LINE_SIZE);
 	for (i = 0; i < dma_devices_max; i++)
@@ -164,10 +163,9 @@ dma_dev_data_prepare(void)
 		return 0;
 
 	size = dma_devices_max * sizeof(struct rte_dma_dev);
-	rte_dma_devices = malloc(size);
+	rte_dma_devices = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
 	if (rte_dma_devices == NULL)
 		return -ENOMEM;
-	memset(rte_dma_devices, 0, size);
 
 	return 0;
 }
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 064785686f7f..e8239c2d22b6 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -73,7 +73,7 @@ struct rte_dma_fp_object {
 	rte_dma_completed_t        completed;
 	rte_dma_completed_status_t completed_status;
 	rte_dma_burst_capacity_t   burst_capacity;
-} __rte_aligned(128);
+} __rte_cache_aligned;
 
 extern struct rte_dma_fp_object *rte_dma_fp_objs;
 
-- 
2.43.0
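For context, the `RTE_PTR_ALIGN` pattern retained in `dma_fp_data_prepare()` above — over-allocate by one cache line, then round the pointer up to a cache-line boundary — can be sketched outside DPDK as follows (the `CACHE_LINE_SIZE` of 64 and the helper names are stand-ins, not DPDK definitions):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define CACHE_LINE_SIZE 64 /* stand-in for RTE_CACHE_LINE_SIZE */

/* Round 'ptr' up to the next multiple of 'align' (a power of two),
 * mirroring what RTE_PTR_ALIGN does in the hunk above. */
static void *
ptr_align(void *ptr, uintptr_t align)
{
	return (void *)(((uintptr_t)ptr + align - 1) & ~(align - 1));
}

/* Over-allocate by one cache line so an aligned, zeroed sub-buffer of
 * the requested size is guaranteed to fit, as dma_fp_data_prepare()
 * does for the fp_objects array. 'raw' receives the pointer to pass
 * to free(); the returned pointer is the aligned view. */
static void *
alloc_cache_aligned(size_t size, void **raw)
{
	*raw = calloc(1, size + CACHE_LINE_SIZE);
	if (*raw == NULL)
		return NULL;
	return ptr_align(*raw, CACHE_LINE_SIZE);
}
```

Note the aligned pointer is at most one cache line past the raw allocation, so the extra `CACHE_LINE_SIZE` bytes always cover the shift.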


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] dmadev: standardize alignment and allocation
  2024-02-02  9:06 [PATCH] dmadev: standardize alignment and allocation pbhagavatula
@ 2024-02-04  1:38 ` fengchengwen
  2024-02-10  6:20   ` [EXT] " Pavan Nikhilesh Bhagavatula
  2024-02-10  6:27 ` [PATCH v2] dmadev: standardize alignment pbhagavatula
  1 sibling, 1 reply; 6+ messages in thread
From: fengchengwen @ 2024-02-04  1:38 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kevin Laatz, Bruce Richardson; +Cc: dev

Hi Pavan,

Allocating fp_objects from rte_memory is a good idea, but it may cause
an rte_memory leak, especially in multi-process scenarios.

Currently, there is no mechanism for releasing such rte_memory that
doesn't belong to any driver.

So I suggest: maybe we could add an rte_mem_align API which allocates
from libc, and use it in these cases.

BTW: rte_dma_devices is only used in the control path, so it doesn't need
to use the rte_memory API, but I think it could use the new rte_mem_align API.
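A minimal sketch of what such a libc-backed helper could look like (hypothetical — the proposed rte_mem_align API does not exist yet at this point; the name `mem_align_zeroed` and the zeroing behaviour are assumptions):

```c
#define _POSIX_C_SOURCE 200112L /* for posix_memalign */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical rte_mem_align-style helper: returns zeroed memory from
 * the libc heap, aligned to 'align' bytes. Per posix_memalign, 'align'
 * must be a power of two and a multiple of sizeof(void *). Unlike
 * rte_zmalloc(), the result lives in per-process memory, so a
 * secondary process cannot leak shared hugepage memory by forgetting
 * to free it. Release with free(). */
static void *
mem_align_zeroed(size_t align, size_t size)
{
	void *ptr = NULL;

	if (posix_memalign(&ptr, align, size) != 0)
		return NULL;
	memset(ptr, 0, size);
	return ptr;
}
```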

Thanks

On 2024/2/2 17:06, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Align fp_objects based on cacheline size, allocate
> devices and fp_objects memory on hugepages.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  lib/dmadev/rte_dmadev.c      | 6 ++----
>  lib/dmadev/rte_dmadev_core.h | 2 +-
>  2 files changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> index 67434c805f43..1fe1434019f0 100644
> --- a/lib/dmadev/rte_dmadev.c
> +++ b/lib/dmadev/rte_dmadev.c
> @@ -143,10 +143,9 @@ dma_fp_data_prepare(void)
>  	 */
>  	size = dma_devices_max * sizeof(struct rte_dma_fp_object) +
>  		RTE_CACHE_LINE_SIZE;
> -	ptr = malloc(size);
> +	ptr = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
>  	if (ptr == NULL)
>  		return -ENOMEM;
> -	memset(ptr, 0, size);
>  
>  	rte_dma_fp_objs = RTE_PTR_ALIGN(ptr, RTE_CACHE_LINE_SIZE);
>  	for (i = 0; i < dma_devices_max; i++)
> @@ -164,10 +163,9 @@ dma_dev_data_prepare(void)
>  		return 0;
>  
>  	size = dma_devices_max * sizeof(struct rte_dma_dev);
> -	rte_dma_devices = malloc(size);
> +	rte_dma_devices = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
>  	if (rte_dma_devices == NULL)
>  		return -ENOMEM;
> -	memset(rte_dma_devices, 0, size);
>  
>  	return 0;
>  }
> diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
> index 064785686f7f..e8239c2d22b6 100644
> --- a/lib/dmadev/rte_dmadev_core.h
> +++ b/lib/dmadev/rte_dmadev_core.h
> @@ -73,7 +73,7 @@ struct rte_dma_fp_object {
>  	rte_dma_completed_t        completed;
>  	rte_dma_completed_status_t completed_status;
>  	rte_dma_burst_capacity_t   burst_capacity;
> -} __rte_aligned(128);
> +} __rte_cache_aligned;
>  
>  extern struct rte_dma_fp_object *rte_dma_fp_objs;
>  
> 


* RE: [EXT] Re: [PATCH] dmadev: standardize alignment and allocation
  2024-02-04  1:38 ` fengchengwen
@ 2024-02-10  6:20   ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 6+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2024-02-10  6:20 UTC (permalink / raw)
  To: fengchengwen, Jerin Jacob, Kevin Laatz, Bruce Richardson; +Cc: dev

> Hi Pavan,
> 
> Allocating fp_objects from rte_memory is a good idea, but it may cause
> an rte_memory leak, especially in multi-process scenarios.
> 
> Currently, there is no mechanism for releasing such rte_memory that
> doesn't belong to any driver.
>

Yeah, the secondary process will leak rte_zmalloc allocations if they are
not freed. The only option currently is to use mmap and allocate non-shared
memory in the secondary process, which is not ideal.
 
> So I suggest: maybe we could add an rte_mem_align API which allocates
> from libc, and use it in these cases.
>

Yeah, maybe in the future we could add something like rte_zmalloc_private,
which would create new mappings in the secondary process. But that is out
of scope for this patch.
 
I will send a v2 dropping the malloc changes and keeping the cache alignment changes.

> BTW: rte_dma_devices is only used in the control path, so it doesn't need
> to use the rte_memory API, but I think it could use the new rte_mem_align API.
> 
> Thanks
> 

Thanks,
Pavan.

> On 2024/2/2 17:06, pbhagavatula@marvell.com wrote:
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Align fp_objects based on cacheline size, allocate
> > devices and fp_objects memory on hugepages.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > ---
> >  lib/dmadev/rte_dmadev.c      | 6 ++----
> >  lib/dmadev/rte_dmadev_core.h | 2 +-
> >  2 files changed, 3 insertions(+), 5 deletions(-)
> >
> > diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
> > index 67434c805f43..1fe1434019f0 100644
> > --- a/lib/dmadev/rte_dmadev.c
> > +++ b/lib/dmadev/rte_dmadev.c
> > @@ -143,10 +143,9 @@ dma_fp_data_prepare(void)
> >  	 */
> >  	size = dma_devices_max * sizeof(struct rte_dma_fp_object) +
> >  		RTE_CACHE_LINE_SIZE;
> > -	ptr = malloc(size);
> > +	ptr = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
> >  	if (ptr == NULL)
> >  		return -ENOMEM;
> > -	memset(ptr, 0, size);
> >
> >  	rte_dma_fp_objs = RTE_PTR_ALIGN(ptr, RTE_CACHE_LINE_SIZE);
> >  	for (i = 0; i < dma_devices_max; i++)
> > @@ -164,10 +163,9 @@ dma_dev_data_prepare(void)
> >  		return 0;
> >
> >  	size = dma_devices_max * sizeof(struct rte_dma_dev);
> > -	rte_dma_devices = malloc(size);
> > +	rte_dma_devices = rte_zmalloc("", size, RTE_CACHE_LINE_SIZE);
> >  	if (rte_dma_devices == NULL)
> >  		return -ENOMEM;
> > -	memset(rte_dma_devices, 0, size);
> >
> >  	return 0;
> >  }
> > diff --git a/lib/dmadev/rte_dmadev_core.h
> b/lib/dmadev/rte_dmadev_core.h
> > index 064785686f7f..e8239c2d22b6 100644
> > --- a/lib/dmadev/rte_dmadev_core.h
> > +++ b/lib/dmadev/rte_dmadev_core.h
> > @@ -73,7 +73,7 @@ struct rte_dma_fp_object {
> >  	rte_dma_completed_t        completed;
> >  	rte_dma_completed_status_t completed_status;
> >  	rte_dma_burst_capacity_t   burst_capacity;
> > -} __rte_aligned(128);
> > +} __rte_cache_aligned;
> >
> >  extern struct rte_dma_fp_object *rte_dma_fp_objs;
> >
> >


* [PATCH v2] dmadev: standardize alignment
  2024-02-02  9:06 [PATCH] dmadev: standardize alignment and allocation pbhagavatula
  2024-02-04  1:38 ` fengchengwen
@ 2024-02-10  6:27 ` pbhagavatula
  2024-02-10 11:34   ` fengchengwen
  1 sibling, 1 reply; 6+ messages in thread
From: pbhagavatula @ 2024-02-10  6:27 UTC (permalink / raw)
  To: fengchengwen, jerinj, Kevin Laatz, Bruce Richardson; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Align fp_objects based on cacheline size defined
by build configuration.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v2 Changes:
- Drop malloc changes.

 lib/dmadev/rte_dmadev_core.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index 064785686f..e8239c2d22 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -73,7 +73,7 @@ struct rte_dma_fp_object {
 	rte_dma_completed_t        completed;
 	rte_dma_completed_status_t completed_status;
 	rte_dma_burst_capacity_t   burst_capacity;
-} __rte_aligned(128);
+} __rte_cache_aligned;

 extern struct rte_dma_fp_object *rte_dma_fp_objs;

--
2.25.1



* Re: [PATCH v2] dmadev: standardize alignment
  2024-02-10  6:27 ` [PATCH v2] dmadev: standardize alignment pbhagavatula
@ 2024-02-10 11:34   ` fengchengwen
  2024-02-19  1:32     ` Thomas Monjalon
  0 siblings, 1 reply; 6+ messages in thread
From: fengchengwen @ 2024-02-10 11:34 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kevin Laatz, Bruce Richardson; +Cc: dev


Acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2024/2/10 14:27, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Align fp_objects based on cacheline size defined
> by build configuration.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> v2 Changes:
> - Drop malloc changes.
>
>   lib/dmadev/rte_dmadev_core.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
> index 064785686f..e8239c2d22 100644
> --- a/lib/dmadev/rte_dmadev_core.h
> +++ b/lib/dmadev/rte_dmadev_core.h
> @@ -73,7 +73,7 @@ struct rte_dma_fp_object {
>   	rte_dma_completed_t        completed;
>   	rte_dma_completed_status_t completed_status;
>   	rte_dma_burst_capacity_t   burst_capacity;
> -} __rte_aligned(128);
> +} __rte_cache_aligned;
>
>   extern struct rte_dma_fp_object *rte_dma_fp_objs;
>
> --
> 2.25.1
>



* Re: [PATCH v2] dmadev: standardize alignment
  2024-02-10 11:34   ` fengchengwen
@ 2024-02-19  1:32     ` Thomas Monjalon
  0 siblings, 0 replies; 6+ messages in thread
From: Thomas Monjalon @ 2024-02-19  1:32 UTC (permalink / raw)
  To: pbhagavatula; +Cc: jerinj, Kevin Laatz, Bruce Richardson, dev, fengchengwen

> > Align fp_objects based on cacheline size defined
> > by build configuration.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>

Applied, thanks.




end of thread, other threads:[~2024-02-19  1:32 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-02  9:06 [PATCH] dmadev: standardize alignment and allocation pbhagavatula
2024-02-04  1:38 ` fengchengwen
2024-02-10  6:20   ` [EXT] " Pavan Nikhilesh Bhagavatula
2024-02-10  6:27 ` [PATCH v2] dmadev: standardize alignment pbhagavatula
2024-02-10 11:34   ` fengchengwen
2024-02-19  1:32     ` Thomas Monjalon
