DPDK patches and discussions
* MLX5 PMD access ring library private data
@ 2023-08-17  5:06 Honnappa Nagarahalli
  2023-08-17 14:06 ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Honnappa Nagarahalli @ 2023-08-17  5:06 UTC (permalink / raw)
  To: dev, Matan Azrad, viacheslavo
  Cc: Tyler Retzlaff, Wathsala Wathawana Vithanage, nd, nd

Hi Matan, Viacheslav,
	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring's private structure members (prod.head and prod.tail) directly. Even though 'struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.

It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.

Thank you,
Honnappa

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: MLX5 PMD access ring library private data
  2023-08-17  5:06 MLX5 PMD access ring library private data Honnappa Nagarahalli
@ 2023-08-17 14:06 ` Stephen Hemminger
  2023-08-18  2:32   ` Jack Min
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2023-08-17 14:06 UTC (permalink / raw)
  To: Honnappa Nagarahalli
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd

On Thu, 17 Aug 2023 05:06:20 +0000
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:

> Hi Matan, Viacheslav,
> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.
> 
> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.

How can reverting be thread safe? The consumer could have already looked at them.


* Re: MLX5 PMD access ring library private data
  2023-08-17 14:06 ` Stephen Hemminger
@ 2023-08-18  2:32   ` Jack Min
  2023-08-18  4:30     ` Honnappa Nagarahalli
  2023-08-18  9:05     ` Konstantin Ananyev
  0 siblings, 2 replies; 14+ messages in thread
From: Jack Min @ 2023-08-18  2:32 UTC (permalink / raw)
  To: Stephen Hemminger, Honnappa Nagarahalli
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd

On 2023/8/17 22:06, Stephen Hemminger wrote:
> On Thu, 17 Aug 2023 05:06:20 +0000
> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>
>> Hi Matan, Viacheslav,
>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>
>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
> How can reverting be thread safe? Consumer could have already looked at them?

Hey,

In our case, this ring is SC/SP, only accessed by one thread
(enqueue/dequeue/revert).

The scenario where we have "revert" is:

We use a ring to manage our HW objects (counters in this case), and each core (thread) has a "cache" (an SC/SP ring) for the sake of performance.

1. Get objects from the "cache" first; if the cache is empty, we fetch a bulk of free objects from the global ring into the cache.

2. Put (free) objects into the "cache" first as well; if the cache is full, we flush a bulk of objects into the global ring in order to make some room in the cache.

However, these HW objects cannot be immediately reused after free. They need time to be reset before they can be used again.

So when we flush the cache, we want to keep the first enqueued objects in place, because they have a better chance of already being reset than the most recently enqueued objects.

We only flush recently enqueued objects back into the global ring, which acts as "LIFO" behavior.

This is why we require "revert" of enqueued objects.

-Jack




* RE: MLX5 PMD access ring library private data
  2023-08-18  2:32   ` Jack Min
@ 2023-08-18  4:30     ` Honnappa Nagarahalli
  2023-08-18  5:57       ` Jack Min
  2023-08-18  9:05     ` Konstantin Ananyev
  1 sibling, 1 reply; 14+ messages in thread
From: Honnappa Nagarahalli @ 2023-08-18  4:30 UTC (permalink / raw)
  To: Jack Min, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd, nd



> -----Original Message-----
> From: Jack Min <jackmin@nvidia.com>
> Sent: Thursday, August 17, 2023 9:32 PM
> To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa
> Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>;
> viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>;
> Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd
> <nd@arm.com>
> Subject: Re: MLX5 PMD access ring library private data
> 
> On 2023/8/17 22:06, Stephen Hemminger wrote:
> > On Thu, 17 Aug 2023 05:06:20 +0000
> > Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> >
> >> Hi Matan, Viacheslav,
> >> 	Tyler pointed out that the function
> __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure
> members (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a
> public structure (mainly because the library provides inline functions), the
> structure members are considered private to the ring library. So, this needs to
> be corrected.
> >>
> >> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying
> to revert things that were enqueued. It is not clear to me why this
> functionality is required. Can you provide the use case for this? We can
> discuss possible solutions.
> > How can reverting be thread safe? Consumer could have already looked at
> them?
> 
> Hey,
> 
> In our case, this ring is SC/SP, only accessed by one thread
> (enqueue/dequeue/revert).
You could implement a simpler and more efficient ring for this (for example, such an implementation would not need any atomic operations and would require fewer cache lines).
Is this function being used in the dataplane?

> 
> The scenario we have "revert" is:
> 
>   We use ring to manager our HW objects (counter in this case) and for each
> core (thread) has "cache" (a SC/SP ring) for sake of performance.
> 
> 1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk of free
> objects from global ring into cache.
> 
> 2. Put (free) objects also into "cache" firstly, if cache is full, we flush a bulk of
> objects into global ring in order to make some rooms in cache.
> 
> However, this HW object cannot be immediately reused after free. It needs
> time to be reset and then can be used again.
> 
> So when we flush cache, we want to keep the first enqueued objects still stay
> there because they have more chance already be reset than the latest
> enqueued objects.
> 
> Only flush recently enqueued objects back into global ring, act as "LIFO"
> behavior.
> 
> This is why we require "revert" enqueued objects.
You could use the 'rte_ring_free_count' API before you enqueue to check for available space.

> 
> -Jack
> 



* Re: MLX5 PMD access ring library private data
  2023-08-18  4:30     ` Honnappa Nagarahalli
@ 2023-08-18  5:57       ` Jack Min
  2023-08-18 13:59         ` Honnappa Nagarahalli
  0 siblings, 1 reply; 14+ messages in thread
From: Jack Min @ 2023-08-18  5:57 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd


On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>
>> -----Original Message-----
>> From: Jack Min <jackmin@nvidia.com>
>> Sent: Thursday, August 17, 2023 9:32 PM
>> To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa
>> Nagarahalli <Honnappa.Nagarahalli@arm.com>
>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>;
>> viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>;
>> Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd
>> <nd@arm.com>
>> Subject: Re: MLX5 PMD access ring library private data
>>
>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>>>
>>>> Hi Matan, Viacheslav,
>>>> 	Tyler pointed out that the function
>> __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure
>> members (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a
>> public structure (mainly because the library provides inline functions), the
>> structure members are considered private to the ring library. So, this needs to
>> be corrected.
>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying
>> to revert things that were enqueued. It is not clear to me why this
>> functionality is required. Can you provide the use case for this? We can
>> discuss possible solutions.
>>> How can reverting be thread safe? Consumer could have already looked at
>> them?
>>
>> Hey,
>>
>> In our case, this ring is SC/SP, only accessed by one thread
>> (enqueue/dequeue/revert).
> You could implement a more simpler and more efficient (For ex: such an implementation would not need any atomic operations, would require less number of cache lines) ring for this.
> Is this function being used in the dataplane?

Yes, we could have our own version of a ring (with no atomic operations), but the basic operations would still be the same as rte_ring's.

Since rte_ring has been well designed and sufficiently tested, there has been no strong reason to rewrite a simpler version of it until today :)


>
>> The scenario we have "revert" is:
>>
>>    We use ring to manager our HW objects (counter in this case) and for each
>> core (thread) has "cache" (a SC/SP ring) for sake of performance.
>>
>> 1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk of free
>> objects from global ring into cache.
>>
>> 2. Put (free) objects also into "cache" firstly, if cache is full, we flush a bulk of
>> objects into global ring in order to make some rooms in cache.
>>
>> However, this HW object cannot be immediately reused after free. It needs
>> time to be reset and then can be used again.
>>
>> So when we flush cache, we want to keep the first enqueued objects still stay
>> there because they have more chance already be reset than the latest
>> enqueued objects.
>>
>> Only flush recently enqueued objects back into global ring, act as "LIFO"
>> behavior.
>>
>> This is why we require "revert" enqueued objects.
> You could use 'rte_ring_free_count' API before you enqueue to check for available space.

We only revert X objects when the cache is full (rte_ring_free_count() is zero).

If there is still one free slot, we will not trigger the revert (flush).


>
>> -Jack
>>



* RE: MLX5 PMD access ring library private data
  2023-08-18  2:32   ` Jack Min
  2023-08-18  4:30     ` Honnappa Nagarahalli
@ 2023-08-18  9:05     ` Konstantin Ananyev
  2023-08-18  9:38       ` Jack Min
  1 sibling, 1 reply; 14+ messages in thread
From: Konstantin Ananyev @ 2023-08-18  9:05 UTC (permalink / raw)
  To: Jack Min, Stephen Hemminger, Honnappa Nagarahalli
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd



> 
> On 2023/8/17 22:06, Stephen Hemminger wrote:
> > On Thu, 17 Aug 2023 05:06:20 +0000
> > Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
> >
> >> Hi Matan, Viacheslav,
> >> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members
> (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a public structure (mainly because the library provides inline
> functions), the structure members are considered private to the ring library. So, this needs to be corrected.
> >>
> >> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to
> me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
> > How can reverting be thread safe? Consumer could have already looked at them?
> 
> Hey,
> 
> In our case, this ring is SC/SP, only accessed by one thread
> (enqueue/dequeue/revert).
> 
> The scenario we have "revert" is:
> 
>   We use ring to manager our HW objects (counter in this case) and for
> each core (thread) has "cache" (a SC/SP ring) for sake of performance.
> 
> 1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk
> of free objects from global ring into cache.
> 
> 2. Put (free) objects also into "cache" firstly, if cache is full, we
> flush a bulk of objects into global ring in order to make some rooms in
> cache.
> 
> However, this HW object cannot be immediately reused after free. It
> needs time to be reset and then can be used again.
> 
> So when we flush cache, we want to keep the first enqueued objects still
> stay there because they have more chance already be reset than the
> latest enqueued objects.
> 
> Only flush recently enqueued objects back into global ring, act as
> "LIFO" behavior.
> 
> This is why we require "revert" enqueued objects.
> 

Wouldn't a simple stack fit you better then?
Something like lib/stack/rte_stack_std.h, but even without the spinlock around it?



* Re: MLX5 PMD access ring library private data
  2023-08-18  9:05     ` Konstantin Ananyev
@ 2023-08-18  9:38       ` Jack Min
  2023-08-19 11:57         ` Konstantin Ananyev
  0 siblings, 1 reply; 14+ messages in thread
From: Jack Min @ 2023-08-18  9:38 UTC (permalink / raw)
  To: Konstantin Ananyev, Stephen Hemminger, Honnappa Nagarahalli
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd


On 2023/8/18 17:05, Konstantin Ananyev wrote:
>
>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>> Honnappa Nagarahalli<Honnappa.Nagarahalli@arm.com>  wrote:
>>>
>>>> Hi Matan, Viacheslav,
>>>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members
>> (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a public structure (mainly because the library provides inline
>> functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to
>> me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
>>> How can reverting be thread safe? Consumer could have already looked at them?
>> Hey,
>>
>> In our case, this ring is SC/SP, only accessed by one thread
>> (enqueue/dequeue/revert).
>>
>> The scenario we have "revert" is:
>>
>>    We use ring to manager our HW objects (counter in this case) and for
>> each core (thread) has "cache" (a SC/SP ring) for sake of performance.
>>
>> 1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk
>> of free objects from global ring into cache.
>>
>> 2. Put (free) objects also into "cache" firstly, if cache is full, we
>> flush a bulk of objects into global ring in order to make some rooms in
>> cache.
>>
>> However, this HW object cannot be immediately reused after free. It
>> needs time to be reset and then can be used again.
>>
>> So when we flush cache, we want to keep the first enqueued objects still
>> stay there because they have more chance already be reset than the
>> latest enqueued objects.
>>
>> Only flush recently enqueued objects back into global ring, act as
>> "LIFO" behavior.
>>
>> This is why we require "revert" enqueued objects.
>>
> Wouldn't then simple stack fit you better?
> Something like lib/stack/rte_stack_std.h, but even without spinlock around?

No, a stack is always a "LIFO" structure, right?

Here we first need this cache to work as "FIFO" in most cases (get/put), because the first enqueued objects have a better chance of already being reset, so we can reuse them.

We only require "LIFO" behavior when we "flush" the cache in order to make some room, so that the next free is quick because it happens in our local cache and needn't access the global ring.

In short, we require a structure that supports both "FIFO" and "LIFO".

-Jack



* RE: MLX5 PMD access ring library private data
  2023-08-18  5:57       ` Jack Min
@ 2023-08-18 13:59         ` Honnappa Nagarahalli
  2023-08-19  1:34           ` Jack Min
  0 siblings, 1 reply; 14+ messages in thread
From: Honnappa Nagarahalli @ 2023-08-18 13:59 UTC (permalink / raw)
  To: Jack Min, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd, nd


From: Jack Min <jackmin@nvidia.com>
Sent: Friday, August 18, 2023 12:57 AM
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
Subject: Re: MLX5 PMD access ring library private data

On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>> -----Original Message-----
>> From: Jack Min <jackmin@nvidia.com>
>> Sent: Thursday, August 17, 2023 9:32 PM
>> To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa
>> Nagarahalli <Honnappa.Nagarahalli@arm.com>
>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>;
>> viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>;
>> Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd
>> <nd@arm.com>
>> Subject: Re: MLX5 PMD access ring library private data
>>
>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>>>
>>>> Hi Matan, Viacheslav,
>>>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though 'struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>>>
>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
>>> How can reverting be thread safe? Consumer could have already looked at them?
>>
>> Hey,
>>
>> In our case, this ring is SC/SP, only accessed by one thread
>> (enqueue/dequeue/revert).
> You could implement a more simpler and more efficient (For ex: such an implementation would not need any atomic operations, would require less number of cache lines) ring for this.
> Is this function being used in the dataplane?

Yes, we can have our own version of ring (no atomic operations) but basic operation are still as same as rte_ring.

Since rte ring has been well-designed and tested sufficiently, so there is no strong reason to re-write a new simple version of it until today :)

>> The scenario we have "revert" is:
>>
>>   We use ring to manager our HW objects (counter in this case) and for each
>> core (thread) has "cache" (a SC/SP ring) for sake of performance.
>>
>> 1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk of free
>> objects from global ring into cache.
>>
>> 2. Put (free) objects also into "cache" firstly, if cache is full, we flush a bulk of
>> objects into global ring in order to make some rooms in cache.
>>
>> However, this HW object cannot be immediately reused after free. It needs
>> time to be reset and then can be used again.
>>
>> So when we flush cache, we want to keep the first enqueued objects still stay
>> there because they have more chance already be reset than the latest
>> enqueued objects.
>>
>> Only flush recently enqueued objects back into global ring, act as "LIFO"
>> behavior.
>>
>> This is why we require "revert" enqueued objects.
> You could use 'rte_ring_free_count' API before you enqueue to check for available space.

Only when cache is full (rte_ring_free_count() is zero), we revert X objects.

If there is still one free slot we will not trigger revert (flush).

[Honnappa] Maybe I was not clear in my recommendation. What I am saying is, you could call 'rte_ring_free_count' to check if you have enough space on the cache ring. If there is not enough space you can enqueue the new objects on the global ring. Pseudo code below:

If (rte_ring_free_count(cache_ring) > n) {
	<enqueue n objects on cache ring>
} else {
	<enqueue n objects on global ring>
}

-Jack

[-- Attachment #2: Type: text/html, Size: 9743 bytes --]


* Re: MLX5 PMD access ring library private data
  2023-08-18 13:59         ` Honnappa Nagarahalli
@ 2023-08-19  1:34           ` Jack Min
  2023-08-21  6:06             ` Honnappa Nagarahalli
  0 siblings, 1 reply; 14+ messages in thread
From: Jack Min @ 2023-08-19  1:34 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd


On 2023/8/18 21:59, Honnappa Nagarahalli wrote:
>
> From: Jack Min <jackmin@nvidia.com>
> Sent: Friday, August 18, 2023 12:57 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Stephen
> Hemminger <stephen@networkplumber.org>
> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>;
> viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>;
> Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
> Subject: Re: MLX5 PMD access ring library private data
>
> On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>
>         -----Original Message-----
>         From: Jack Min <jackmin@nvidia.com>
>         Sent: Thursday, August 17, 2023 9:32 PM
>         To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa
>         Nagarahalli <Honnappa.Nagarahalli@arm.com>
>         Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>;
>         viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>;
>         Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd
>         <nd@arm.com>
>         Subject: Re: MLX5 PMD access ring library private data
>
>         On 2023/8/17 22:06, Stephen Hemminger wrote:
>
>             On Thu, 17 Aug 2023 05:06:20 +0000
>             Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>
>                 Hi Matan, Viacheslav,
>
>                        Tyler pointed out that the function
>
>         __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure
>
>         members (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a
>
>         public structure (mainly because the library provides inline functions), the
>
>         structure members are considered private to the ring library. So, this needs to
>
>         be corrected.
>
>                 It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying
>
>         to revert things that were enqueued. It is not clear to me why this
>
>         functionality is required. Can you provide the use case for this? We can
>
>         discuss possible solutions.
>
>             How can reverting be thread safe? Consumer could have already looked at
>
>         them?
>
>         Hey,
>
>         In our case, this ring is SC/SP, only accessed by one thread
>
>         (enqueue/dequeue/revert).
>
>     You could implement a more simpler and more efficient (For ex: such an implementation would not need any atomic operations, would require less number of cache lines) ring for this.
>
>     Is this function being used in the dataplane?
>
> Yes,  we can have our own version of ring (no atomic operations) but 
> basic operation are still as same as rte_ring.
>
> Since rte ring has been well-designed and tested sufficiently, so 
> there is no strong reason to re-write a new simple version of it until 
> today :)
>
>         The scenario we have "revert" is:
>
>            We use ring to manager our HW objects (counter in this case) and for each
>
>         core (thread) has "cache" (a SC/SP ring) for sake of performance.
>
>         1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk of free
>
>         objects from global ring into cache.
>
>         2. Put (free) objects also into "cache" firstly, if cache is full, we flush a bulk of
>
>         objects into global ring in order to make some rooms in cache.
>
>         However, this HW object cannot be immediately reused after free. It needs
>
>         time to be reset and then can be used again.
>
>         So when we flush cache, we want to keep the first enqueued objects still stay
>
>         there because they have more chance already be reset than the latest
>
>         enqueued objects.
>
>         Only flush recently enqueued objects back into global ring, act as "LIFO"
>
>         behavior.
>
>         This is why we require "revert" enqueued objects.
>
>     You could use 'rte_ring_free_count' API before you enqueue to check for available space.
>
> Only when cache is full (rte_ring_free_count() is zero), we revert X 
> objects.
>
> If there is still  one free slot we will not trigger revert (flush).
>
> [Honnappa] Maybe I was not clear in my recommendation. What I am
> saying is, you could call 'rte_ring_free_count' to check if you have
> enough space on the cache ring. If there is not enough space, you can
> enqueue the new objects on the global ring. Pseudo code below:
>
> If (rte_ring_free_count(cache_ring) > n) {
>         <enqueue n objects on cache ring>
> } else {
>         <enqueue n objects on global ring>
> }
>
Hey,

Then the next n objects will still be enqueued into the global ring, not into the cache, right? (We keep enqueueing objects continually.)

Our requirement is like this:

if (rte_ring_free_count(cache_ring) > 0) {
        <enqueue this object on cache ring>
} else { /* cache is full */
        <enqueue this object into global ring>
        <move the latest n objects into global ring too>
}

It's not about whether this one enqueue into the cache can succeed or not.

It's that we need to "free" more room in advance, so that the next n objects can be enqueued into the cache.

-Jack

>         -Jack
>



* Re: MLX5 PMD access ring library private data
  2023-08-18  9:38       ` Jack Min
@ 2023-08-19 11:57         ` Konstantin Ananyev
  2023-08-20  1:41           ` Jack Min
  0 siblings, 1 reply; 14+ messages in thread
From: Konstantin Ananyev @ 2023-08-19 11:57 UTC (permalink / raw)
  To: Jack Min, Konstantin Ananyev, Stephen Hemminger, Honnappa Nagarahalli
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd

On 18/08/2023 10:38, Jack Min wrote:
> On 2023/8/18 17:05, Konstantin Ananyev wrote:
>>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>>> Honnappa Nagarahalli<Honnappa.Nagarahalli@arm.com>  wrote:
>>>>
>>>>> Hi Matan, Viacheslav,
>>>>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members
>>> (prod.head and prod.tail) directly. Even though ' struct rte_ring' is a public structure (mainly because the library provides inline
>>> functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to
>>> me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
>>>> How can reverting be thread safe? Consumer could have already looked at them?
>>> Hey,
>>>
>>> In our case, this ring is SC/SP, only accessed by one thread
>>> (enqueue/dequeue/revert).
>>>
>>> The scenario we have "revert" is:
>>>
>>>    We use ring to manager our HW objects (counter in this case) and for
>>> each core (thread) has "cache" (a SC/SP ring) for sake of performance.
>>>
>>> 1. Get objects from "cache" firstly, if cache is empty, we fetch a bulk
>>> of free objects from global ring into cache.
>>>
>>> 2. Put (free) objects also into "cache" firstly, if cache is full, we
>>> flush a bulk of objects into global ring in order to make some rooms in
>>> cache.
>>>
>>> However, this HW object cannot be immediately reused after free. It
>>> needs time to be reset and then can be used again.
>>>
>>> So when we flush cache, we want to keep the first enqueued objects still
>>> stay there because they have more chance already be reset than the
>>> latest enqueued objects.
>>>
>>> Only flush recently enqueued objects back into global ring, act as
>>> "LIFO" behavior.
>>>
>>> This is why we require "revert" enqueued objects.
>>>
>> Wouldn't then simple stack fit you better?
>> Something like lib/stack/rte_stack_std.h, but even without spinlock around?
> 
> No, stack is always a "LIFO" struct, right?

Yep.

> 
> Here first we need this cache works as "FIFO" in most cases (get/put) 
> because the first enqueued objects have more chance that are already 
> reset so can reuse them.
> 
> We only require "LIFO" behavior when "flush" cache in order to make some 
> room, so next free will be quick because it happens in our local cache, 
> needn't access global ring.
> 
> In short, we require a struct supports "FIFO" and "LIFO".

Ok, thanks for the explanation.
So you need a ring, but with an ability to revert the producer N elements
back, right?


* Re: MLX5 PMD access ring library private data
  2023-08-19 11:57         ` Konstantin Ananyev
@ 2023-08-20  1:41           ` Jack Min
  0 siblings, 0 replies; 14+ messages in thread
From: Jack Min @ 2023-08-20  1:41 UTC (permalink / raw)
  To: Konstantin Ananyev, Konstantin Ananyev, Stephen Hemminger,
	Honnappa Nagarahalli
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd


On 2023/8/19 19:57, Konstantin Ananyev wrote:
> On 18/08/2023 10:38, Jack Min wrote:
>> On 2023/8/18 17:05, Konstantin Ananyev wrote:
>>>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>>>> Honnappa Nagarahalli<Honnappa.Nagarahalli@arm.com> wrote:
>>>>>
>>>>>> Hi Matan, Viacheslav,
>>>>>>     Tyler pointed out that the function 
>>>>>> __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private 
>>>>>> structure members
>>>> (prod.head and prod.tail) directly. Even though ' struct rte_ring' 
>>>> is a public structure (mainly because the library provides inline
>>>> functions), the structure members are considered private to the 
>>>> ring library. So, this needs to be corrected.
>>>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is 
>>>>>> trying to revert things that were enqueued. It is not clear to
>>>> me why this functionality is required. Can you provide the use case 
>>>> for this? We can discuss possible solutions.
>>>>> How can reverting be thread safe? Consumer could have already 
>>>>> looked at them?
>>>> Hey,
>>>>
>>>> In our case, this ring is SC/SP, only accessed by one thread
>>>> (enqueue/dequeue/revert).
>>>>
>>>> The scenario where we have "revert" is:
>>>>
>>>>    We use a ring to manage our HW objects (counters in this case) and
>>>> each core (thread) has a "cache" (an SC/SP ring) for the sake of
>>>> performance.
>>>>
>>>> 1. Get objects from the "cache" first; if the cache is empty, we
>>>> fetch a bulk of free objects from the global ring into the cache.
>>>>
>>>> 2. Put (free) objects into the "cache" first as well; if the cache is
>>>> full, we flush a bulk of objects into the global ring in order to
>>>> make some room in the cache.
>>>>
>>>> However, this HW object cannot be immediately reused after free. It
>>>> needs time to be reset before it can be used again.
>>>>
>>>> So when we flush the cache, we want to keep the first enqueued
>>>> objects there because they have a better chance of already being
>>>> reset than the latest enqueued objects.
>>>>
>>>> Only the recently enqueued objects are flushed back into the global
>>>> ring, which acts as "LIFO" behavior.
>>>>
>>>> This is why we require "revert" of enqueued objects.
>>>>
>>> Wouldn't a simple stack then fit you better?
>>> Something like lib/stack/rte_stack_std.h, but even without the
>>> spinlock around it?
>>
>> No, stack is always a "LIFO" struct, right?
>
> Yep.
>
>>
>> Here, first, we need this cache to work as "FIFO" in most cases
>> (get/put) because the first enqueued objects are more likely to have
>> already been reset, so we can reuse them.
>>
>> We only require "LIFO" behavior when we "flush" the cache in order to
>> make some room, so the next free will be quick because it happens in
>> our local cache without accessing the global ring.
>>
>> In short, we require a struct that supports both "FIFO" and "LIFO".
>
> Ok, thanks for the explanation.
> So you need a ring, but with an ability to revert prod N elements 
> back, right?

Yes, exactly!


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: MLX5 PMD access ring library private data
  2023-08-19  1:34           ` Jack Min
@ 2023-08-21  6:06             ` Honnappa Nagarahalli
  2023-08-21  6:56               ` Jack Min
  0 siblings, 1 reply; 14+ messages in thread
From: Honnappa Nagarahalli @ 2023-08-21  6:06 UTC (permalink / raw)
  To: Jack Min, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd, Konstantin Ananyev, nd

It would be good if you could fix the email on your side to text format. Comments inline.

From: Jack Min <jackmin@nvidia.com>
Sent: Friday, August 18, 2023 8:35 PM
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
Subject: Re: MLX5 PMD access ring library private data

> On 2023/8/18 21:59, Honnappa Nagarahalli wrote:
>> From: Jack Min <jackmin@nvidia.com>
>> Sent: Friday, August 18, 2023 12:57 AM
>> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Stephen Hemminger <stephen@networkplumber.org>
>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
>> Subject: Re: MLX5 PMD access ring library private data
>>
>>> On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>>>> -----Original Message-----
>>>> From: Jack Min <jackmin@nvidia.com>
>>>> Sent: Thursday, August 17, 2023 9:32 PM
>>>> To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
>>>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
>>>> Subject: Re: MLX5 PMD access ring library private data
>>>>
>>>>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>>>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>>>>> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>>>>>>
>>>>>>> Hi Matan, Viacheslav,
>>>>>>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though 'struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>>>>>>
>>>>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
>>>>>> How can reverting be thread safe? Consumer could have already looked at them?
>>>>>
>>>>> Hey,
>>>>>
>>>>> In our case, this ring is SC/SP, only accessed by one thread (enqueue/dequeue/revert).
>>>> You could implement a simpler and more efficient ring for this (for example, such an implementation would not need any atomic operations and would require fewer cache lines).
>>>> Is this function being used in the dataplane?
>>> Yes, we can have our own version of the ring (no atomic operations) but the basic operations are still the same as rte_ring.
>>> Since rte_ring has been well designed and tested sufficiently, there was no strong reason to re-write a new simple version of it until today :)
>>>>> The scenario where we have "revert" is:
>>>>>
>>>>>    We use a ring to manage our HW objects (counters in this case) and each core (thread) has a "cache" (an SC/SP ring) for the sake of performance.
>>>>>
>>>>> 1. Get objects from the "cache" first; if the cache is empty, we fetch a bulk of free objects from the global ring into the cache.
>>>>>
>>>>> 2. Put (free) objects into the "cache" first as well; if the cache is full, we flush a bulk of objects into the global ring in order to make some room in the cache.
>>>>>
>>>>> However, this HW object cannot be immediately reused after free. It needs time to be reset before it can be used again.
>>>>>
>>>>> So when we flush the cache, we want to keep the first enqueued objects there because they have a better chance of already being reset than the latest enqueued objects.
>>>>>
>>>>> Only the recently enqueued objects are flushed back into the global ring, which acts as "LIFO" behavior.
>>>>>
>>>>> This is why we require "revert" of enqueued objects.
>>>> You could use the 'rte_ring_free_count' API before you enqueue to check for available space.
>>> Only when the cache is full (rte_ring_free_count() is zero) do we revert X objects.
>>> If there is still one free slot we will not trigger the revert (flush).
>> [Honnappa] Maybe I was not clear in my recommendation. What I am saying is, you could call 'rte_ring_free_count' to check if you have enough space on the cache ring. If there is not enough space you can enqueue the new objects on the global ring. Pseudo code below:
>>
>> if (rte_ring_free_count(cache_ring) > n) {
>> 	<enqueue n objects on cache ring>
>> } else {
>> 	<enqueue n objects on global ring>
>> }
> Hey,
>
> Then the next n objects will still be enqueued into the global ring, not into the cache, right? (We enqueue nnnn objects continually.)
>
> Our requirement is like this:
>
> if (rte_ring_free_count(cache_ring) > 0) {
> 	<enqueue this object on cache ring>
> } else { /* cache is full */
> 	<enqueue this object into global ring>
> 	<move the latest n objects into global ring too>

[Honnappa] Understood. IMO, this is a unique requirement. The ring library does not support this and is not designed for this. As per the guidelines and past agreements, accessing structure members in ring structures is not allowed.

I think a simple implementation like [1] would suffice for your needs.

[1] https://patches.dpdk.org/project/dpdk/patch/20230821060420.3509667-1-honnappa.nagarahalli@arm.com/

> }
>
> It's not about whether this enqueue into the cache can succeed or not.
>
> It's that we need to "free" more room in advance so the next n objects can be enqueued into the cache.
>
> -Jack
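The put path Honnappa and Jack are debating — append FIFO-style while the cache has room, and on overflow spill the newest entries (the ones least likely to be reset yet) to the global pool — can be sketched with plain arrays standing in for the rings. All names and sizes below are illustrative only, not the MLX5 PMD code.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only. The cache keeps its oldest entries (most likely
 * already reset by HW) and spills the FLUSH_N newest entries to the
 * global pool when it overflows. */
#define CACHE_SIZE 4
#define FLUSH_N    2    /* newest entries spilled on overflow */

struct pool {
	uint32_t objs[64];
	uint32_t n;
};

struct cache {
	uint32_t objs[CACHE_SIZE];  /* objs[0] is the oldest entry */
	uint32_t n;
};

static void cache_put(struct cache *c, struct pool *global, uint32_t obj)
{
	if (c->n < CACHE_SIZE) {
		c->objs[c->n++] = obj;  /* common case: FIFO-style append */
		return;
	}
	/* Cache full: spill this object plus the FLUSH_N newest cached
	 * objects to the global pool (the "LIFO" flush / revert), keeping
	 * the oldest entries for fast, likely-reset reuse. */
	global->objs[global->n++] = obj;
	while (c->n > CACHE_SIZE - FLUSH_N)
		global->objs[global->n++] = c->objs[--c->n];
}
```

This is the behavior Honnappa's `rte_ring_free_count()` pseudo code does not provide: the else branch here frees room in the cache, so the *next* puts hit the fast local path again.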

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: MLX5 PMD access ring library private data
  2023-08-21  6:06             ` Honnappa Nagarahalli
@ 2023-08-21  6:56               ` Jack Min
  2023-08-22  4:18                 ` Honnappa Nagarahalli
  0 siblings, 1 reply; 14+ messages in thread
From: Jack Min @ 2023-08-21  6:56 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd, Konstantin Ananyev

On 2023/8/21 14:06, Honnappa Nagarahalli wrote:
> It would be good if you could fix the email on your side to text format. Comments inline.
Seems a wrong setting on my email client. Sorry.
> From: Jack Min <jackmin@nvidia.com>
> Sent: Friday, August 18, 2023 8:35 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Stephen Hemminger <stephen@networkplumber.org>
> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
> Subject: Re: MLX5 PMD access ring library private data
>
>> On 2023/8/18 21:59, Honnappa Nagarahalli wrote:
>>> From: Jack Min <jackmin@nvidia.com>
>>> Sent: Friday, August 18, 2023 12:57 AM
>>> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Stephen Hemminger <stephen@networkplumber.org>
>>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
>>> Subject: Re: MLX5 PMD access ring library private data
>>>
>>>> On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>>>>> -----Original Message-----
>>>>> From: Jack Min <jackmin@nvidia.com>
>>>>> Sent: Thursday, August 17, 2023 9:32 PM
>>>>> To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
>>>>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: MLX5 PMD access ring library private data
>>>>>
>>>>>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>>>>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>>>>>> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>>>>>>>
>>>>>>>> Hi Matan, Viacheslav,
>>>>>>>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though 'struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>>>>>>>
>>>>>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
>>>>>>> How can reverting be thread safe? Consumer could have already looked at them?
>>>>>>
>>>>>> Hey,
>>>>>>
>>>>>> In our case, this ring is SC/SP, only accessed by one thread (enqueue/dequeue/revert).
>>>>> You could implement a simpler and more efficient ring for this (for example, such an implementation would not need any atomic operations and would require fewer cache lines).
>>>>> Is this function being used in the dataplane?
>>>> Yes, we can have our own version of the ring (no atomic operations) but the basic operations are still the same as rte_ring.
>>>> Since rte_ring has been well designed and tested sufficiently, there was no strong reason to re-write a new simple version of it until today :)
>>>>>> The scenario where we have "revert" is:
>>>>>>
>>>>>>    We use a ring to manage our HW objects (counters in this case) and each core (thread) has a "cache" (an SC/SP ring) for the sake of performance.
>>>>>>
>>>>>> 1. Get objects from the "cache" first; if the cache is empty, we fetch a bulk of free objects from the global ring into the cache.
>>>>>>
>>>>>> 2. Put (free) objects into the "cache" first as well; if the cache is full, we flush a bulk of objects into the global ring in order to make some room in the cache.
>>>>>>
>>>>>> However, this HW object cannot be immediately reused after free. It needs time to be reset before it can be used again.
>>>>>>
>>>>>> So when we flush the cache, we want to keep the first enqueued objects there because they have a better chance of already being reset than the latest enqueued objects.
>>>>>>
>>>>>> Only the recently enqueued objects are flushed back into the global ring, which acts as "LIFO" behavior.
>>>>>>
>>>>>> This is why we require "revert" of enqueued objects.
>>>>> You could use the 'rte_ring_free_count' API before you enqueue to check for available space.
>>>> Only when the cache is full (rte_ring_free_count() is zero) do we revert X objects.
>>>> If there is still one free slot we will not trigger the revert (flush).
>>> [Honnappa] Maybe I was not clear in my recommendation. What I am saying is, you could call 'rte_ring_free_count' to check if you have enough space on the cache ring. If there is not enough space you can enqueue the new objects on the global ring. Pseudo code below:
>>>
>>> if (rte_ring_free_count(cache_ring) > n) {
>>> 	<enqueue n objects on cache ring>
>>> } else {
>>> 	<enqueue n objects on global ring>
>>> }
>> Hey,
>>
>> Then the next n objects will still be enqueued into the global ring, not into the cache, right? (We enqueue nnnn objects continually.)
>>
>> Our requirement is like this:
>>
>> if (rte_ring_free_count(cache_ring) > 0) {
>> 	<enqueue this object on cache ring>
>> } else { /* cache is full */
>> 	<enqueue this object into global ring>
>> 	<move the latest n objects into global ring too>
> [Honnappa] Understood. IMO, this is a unique requirement. The ring library does not support this and is not designed for this. As per the guidelines and past agreements, accessing structure members in ring structures is not allowed.
Alright. Now I'm aware of this.

Do we have a document about this? I probably overlooked it...

> I think a simple implementation like [1] would suffice for your needs.
>
> [1] https://patches.dpdk.org/project/dpdk/patch/20230821060420.3509667-1-honnappa.nagarahalli@arm.com/
>
>> }
Yes, mostly.

It will be better if we have a "zero-copy" version, like: 
rte_st_ring_dequeue_zc_at_head_burst_elem_start()

-Jack
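In spirit, the zero-copy dequeue Jack asks for could look something like the sketch below: hand the caller a pointer to a contiguous run of elements at the consumer head, and advance the tail only once the caller has consumed them in place. This is purely illustrative — the names and the actual API in Honnappa's patch may differ.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical zero-copy dequeue sketch for a single-threaded ring. */
#define ZC_RING_SIZE 8                  /* must be a power of two */
#define ZC_RING_MASK (ZC_RING_SIZE - 1)

struct zc_ring {
	uint32_t head;                  /* producer index */
	uint32_t tail;                  /* consumer index */
	uint32_t elems[ZC_RING_SIZE];
};

/* Start a zero-copy dequeue of up to n elements at the consumer head.
 * Returns how many elements are contiguously readable at *start (the
 * run is clipped at the wrap point); the caller consumes them in place
 * and then calls zc_dequeue_zc_finish(). */
static uint32_t zc_dequeue_zc_start(struct zc_ring *r, uint32_t n,
				    uint32_t **start)
{
	uint32_t avail = r->head - r->tail;
	uint32_t idx = r->tail & ZC_RING_MASK;

	if (n > avail)
		n = avail;
	if (n > ZC_RING_SIZE - idx)     /* clip at the wrap point */
		n = ZC_RING_SIZE - idx;
	*start = &r->elems[idx];
	return n;
}

/* Commit the consumption: only now does the space become reusable. */
static void zc_dequeue_zc_finish(struct zc_ring *r, uint32_t n)
{
	r->tail += n;
}
```

Splitting the operation into start/finish is what makes it zero-copy: no element is copied out, and the producer cannot overwrite the run until the finish call moves the tail.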


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: MLX5 PMD access ring library private data
  2023-08-21  6:56               ` Jack Min
@ 2023-08-22  4:18                 ` Honnappa Nagarahalli
  0 siblings, 0 replies; 14+ messages in thread
From: Honnappa Nagarahalli @ 2023-08-22  4:18 UTC (permalink / raw)
  To: Jack Min, Stephen Hemminger
  Cc: dev, Matan Azrad, viacheslavo, Tyler Retzlaff,
	Wathsala Wathawana Vithanage, nd, Konstantin Ananyev,
	Aditya Ambadipudi, nd

<snip>

>>>>> On 2023/8/18 12:30, Honnappa Nagarahalli wrote:
>>>>>> -----Original Message-----
>>>>>> From: Jack Min <jackmin@nvidia.com>
>>>>>> Sent: Thursday, August 17, 2023 9:32 PM
>>>>>> To: Stephen Hemminger <stephen@networkplumber.org>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
>>>>>> Cc: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; viacheslavo@nvidia.com; Tyler Retzlaff <roretzla@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>; nd <nd@arm.com>
>>>>>> Subject: Re: MLX5 PMD access ring library private data
>>>>>>
>>>>>>> On 2023/8/17 22:06, Stephen Hemminger wrote:
>>>>>>>> On Thu, 17 Aug 2023 05:06:20 +0000
>>>>>>>> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Matan, Viacheslav,
>>>>>>>>> 	Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though 'struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.
>>>>>>>>>
>>>>>>>>> It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
>>>>>>>> How can reverting be thread safe? Consumer could have already looked at them?
>>>>>>>
>>>>>>> Hey,
>>>>>>>
>>>>>>> In our case, this ring is SC/SP, only accessed by one thread (enqueue/dequeue/revert).
>>>>>> You could implement a simpler and more efficient ring for this (for example, such an implementation would not need any atomic operations and would require fewer cache lines).
>>>>>> Is this function being used in the dataplane?
>>>>> Yes, we can have our own version of the ring (no atomic operations) but the basic operations are still the same as rte_ring.
>>>>> Since rte_ring has been well designed and tested sufficiently, there was no strong reason to re-write a new simple version of it until today :)
>>>>>>> The scenario where we have "revert" is:
>>>>>>>
>>>>>>>    We use a ring to manage our HW objects (counters in this case) and each core (thread) has a "cache" (an SC/SP ring) for the sake of performance.
>>>>>>>
>>>>>>> 1. Get objects from the "cache" first; if the cache is empty, we fetch a bulk of free objects from the global ring into the cache.
>>>>>>>
>>>>>>> 2. Put (free) objects into the "cache" first as well; if the cache is full, we flush a bulk of objects into the global ring in order to make some room in the cache.
>>>>>>>
>>>>>>> However, this HW object cannot be immediately reused after free. It needs time to be reset before it can be used again.
>>>>>>>
>>>>>>> So when we flush the cache, we want to keep the first enqueued objects there because they have a better chance of already being reset than the latest enqueued objects.
>>>>>>>
>>>>>>> Only the recently enqueued objects are flushed back into the global ring, which acts as "LIFO" behavior.
>>>>>>>
>>>>>>> This is why we require "revert" of enqueued objects.
>>>>>> You could use the 'rte_ring_free_count' API before you enqueue to check for available space.
>>>>> Only when the cache is full (rte_ring_free_count() is zero) do we revert X objects.
>>>>> If there is still one free slot we will not trigger the revert (flush).
>>>> [Honnappa] Maybe I was not clear in my recommendation. What I am saying is, you could call 'rte_ring_free_count' to check if you have enough space on the cache ring. If there is not enough space you can enqueue the new objects on the global ring. Pseudo code below:
>>>>
>>>> if (rte_ring_free_count(cache_ring) > n) {
>>>> 	<enqueue n objects on cache ring>
>>>> } else {
>>>> 	<enqueue n objects on global ring>
>>>> }
>>> Hey,
>>>
>>> Then the next n objects will still be enqueued into the global ring, not into the cache, right? (We enqueue nnnn objects continually.)
>>>
>>> Our requirement is like this:
>>>
>>> if (rte_ring_free_count(cache_ring) > 0) {
>>> 	<enqueue this object on cache ring>
>>> } else { /* cache is full */
>>> 	<enqueue this object into global ring>
>>> 	<move the latest n objects into global ring too>
>> [Honnappa] Understood. IMO, this is a unique requirement. The ring library does not support this and is not designed for this. As per the guidelines and past agreements, accessing structure members in ring structures is not allowed.
> Alright. Now I'm aware of this.
>
> Do we have a document about this? I probably overlooked it...
Not sure if this is documented; we can add it to the coding guidelines.

>> I think a simple implementation like [1] would suffice for your needs.
>>
>> [1] https://patches.dpdk.org/project/dpdk/patch/20230821060420.3509667-1-honnappa.nagarahalli@arm.com/
>>
>> }
> Yes, mostly.
>
> It will be better if we have a "zero-copy" version, like:
> rte_st_ring_dequeue_zc_at_head_burst_elem_start()
Yes, will add.

> -Jack


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2023-08-22  4:18 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-17  5:06 MLX5 PMD access ring library private data Honnappa Nagarahalli
2023-08-17 14:06 ` Stephen Hemminger
2023-08-18  2:32   ` Jack Min
2023-08-18  4:30     ` Honnappa Nagarahalli
2023-08-18  5:57       ` Jack Min
2023-08-18 13:59         ` Honnappa Nagarahalli
2023-08-19  1:34           ` Jack Min
2023-08-21  6:06             ` Honnappa Nagarahalli
2023-08-21  6:56               ` Jack Min
2023-08-22  4:18                 ` Honnappa Nagarahalli
2023-08-18  9:05     ` Konstantin Ananyev
2023-08-18  9:38       ` Jack Min
2023-08-19 11:57         ` Konstantin Ananyev
2023-08-20  1:41           ` Jack Min

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).