* [dpdk-dev] eventdev: sw rx adapter enqueue caching
@ 2019-05-07 9:52 Elo, Matias (Nokia - FI/Espoo)
2019-05-07 9:52 ` Elo, Matias (Nokia - FI/Espoo)
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Elo, Matias (Nokia - FI/Espoo) @ 2019-05-07 9:52 UTC (permalink / raw)
To: dev
Hi,
The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at least BATCH_SIZE (=32) packets have been received before enqueueing them to eventdev. For example, in validation testing, where often only a small number of specific test packets is sent to the NIC, this causes a lot of problems. One would always have to transmit at least BATCH_SIZE test packets before anything can be received from eventdev. Additionally, if the rx packet rate is slow, this also adds a considerable amount of delay.
Looking at the rx adapter API and sw implementation code, there doesn't seem to be a way to disable this internal caching. In my opinion this "functionality" makes testing the sw rx adapter so cumbersome that either the implementation should be modified to enqueue the cached packets after a while (some performance penalty) or there should be some method to disable caching. Any opinions on how this issue could be fixed?
Regards,
Matias
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 9:52 [dpdk-dev] eventdev: sw rx adapter enqueue caching Elo, Matias (Nokia - FI/Espoo)
2019-05-07 9:52 ` Elo, Matias (Nokia - FI/Espoo)
@ 2019-05-07 11:12 ` Honnappa Nagarahalli
2019-05-07 11:12 ` Honnappa Nagarahalli
2019-05-07 12:01 ` Mattias Rönnblom
2019-05-07 11:56 ` Mattias Rönnblom
2 siblings, 2 replies; 16+ messages in thread
From: Honnappa Nagarahalli @ 2019-05-07 11:12 UTC (permalink / raw)
To: Elo, Matias (Nokia - FI/Espoo), dev; +Cc: Honnappa Nagarahalli, nd, nd
>
> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter-
> >event_enqueue_buffer', which stores packets received from the NIC until at
> least BATCH_SIZE (=32) packets have been received before enqueueing them
> to eventdev. For example in case of validation testing, where often a small
> number of specific test packets is sent to the NIC, this causes a lot of
> problems. One would always have to transmit at least BATCH_SIZE test
> packets before anything can be received from eventdev. Additionally, if the rx
> packet rate is slow this also adds a considerable amount of additional delay.
>
> Looking at the rx adapter API and sw implementation code there doesn’t
> seem to be a way to disable this internal caching. In my opinion this
> “functionality" makes testing sw rx adapter so cumbersome that either the
> implementation should be modified to enqueue the cached packets after a
> while (some performance penalty) or there should be some method to
> disable caching. Any opinions how this issue could be fixed?
At the minimum, I would think there should be a compile time option.
From a use case perspective, I think it falls under latency vs throughput considerations. If there is a latency sensitive application, it might not want to wait till 32 packets are received.
>
>
> Regards,
> Matias
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 9:52 [dpdk-dev] eventdev: sw rx adapter enqueue caching Elo, Matias (Nokia - FI/Espoo)
2019-05-07 9:52 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 11:12 ` Honnappa Nagarahalli
@ 2019-05-07 11:56 ` Mattias Rönnblom
2019-05-07 11:56 ` Mattias Rönnblom
2 siblings, 1 reply; 16+ messages in thread
From: Mattias Rönnblom @ 2019-05-07 11:56 UTC (permalink / raw)
To: Elo, Matias (Nokia - FI/Espoo), dev
On 2019-05-07 11:52, Elo, Matias (Nokia - FI/Espoo) wrote:
> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at least BATCH_SIZE (=32) packets have been received before enqueueing them to eventdev. For example in case of validation testing, where often a small number of specific test packets is sent to the NIC, this causes a lot of problems. One would always have to transmit at least BATCH_SIZE test packets before anything can be received from eventdev. Additionally, if the rx packet rate is slow this also adds a considerable amount of additional delay.
>
> Looking at the rx adapter API and sw implementation code there doesn’t seem to be a way to disable this internal caching. In my opinion this “functionality" makes testing sw rx adapter so cumbersome that either the implementation should be modified to enqueue the cached packets after a while (some performance penalty) or there should be some method to disable caching. Any opinions how this issue could be fixed?
>
The rx adapter's service function will be called repeatedly, at a very
high frequency (especially in near-idle situations). One potential
scheme is to keep track, by means of a counter, of the number of calls
since the last packet was received from the NIC, and to flush the
buffers after a certain number of idle (zero-NIC-dequeue) calls.
That way, you maintain good performance while not introducing too
much latency.
The DSW Event Device takes this approach to flushing its internal buffers.
Another way would be to use a timer: either an adapter-internal TSC
timestamp tracking buffer age, or an rte_timer. rdtsc is not for free,
so I would lean toward the first option.
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 11:12 ` Honnappa Nagarahalli
2019-05-07 11:12 ` Honnappa Nagarahalli
@ 2019-05-07 12:01 ` Mattias Rönnblom
2019-05-07 12:01 ` Mattias Rönnblom
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
1 sibling, 2 replies; 16+ messages in thread
From: Mattias Rönnblom @ 2019-05-07 12:01 UTC (permalink / raw)
To: Honnappa Nagarahalli, Elo, Matias (Nokia - FI/Espoo), dev; +Cc: nd
On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
>>
>> Hi,
>>
>> The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter-
>>> event_enqueue_buffer', which stores packets received from the NIC until at
>> least BATCH_SIZE (=32) packets have been received before enqueueing them
>> to eventdev. For example in case of validation testing, where often a small
>> number of specific test packets is sent to the NIC, this causes a lot of
>> problems. One would always have to transmit at least BATCH_SIZE test
>> packets before anything can be received from eventdev. Additionally, if the rx
>> packet rate is slow this also adds a considerable amount of additional delay.
>>
>> Looking at the rx adapter API and sw implementation code there doesn’t
>> seem to be a way to disable this internal caching. In my opinion this
>> “functionality" makes testing sw rx adapter so cumbersome that either the
>> implementation should be modified to enqueue the cached packets after a
>> while (some performance penalty) or there should be some method to
>> disable caching. Any opinions how this issue could be fixed?
> At the minimum, I would think there should be a compile time option.
> From a use case perspective, I think it falls under latency vs throughput considerations. If there is a latency sensitive application, it might not want to wait till 32 packets are received.
>
From what I understood from Matias Elo, and also after a quick glance at
the code, the unlucky packets will be buffered indefinitely if the
system goes idle. This is totally unacceptable (both in production and
validation), in my opinion, and should be filed as a bug.
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 12:01 ` Mattias Rönnblom
2019-05-07 12:01 ` Mattias Rönnblom
@ 2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
` (2 more replies)
1 sibling, 3 replies; 16+ messages in thread
From: Elo, Matias (Nokia - FI/Espoo) @ 2019-05-07 12:03 UTC (permalink / raw)
To: Mattias Rönnblom; +Cc: Honnappa Nagarahalli, dev, nd
On 7 May 2019, at 15:01, Mattias Rönnblom <hofors@lysator.liu.se> wrote:
On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
Hi,
The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter-
event_enqueue_buffer', which stores packets received from the NIC until at
least BATCH_SIZE (=32) packets have been received before enqueueing them
to eventdev. For example in case of validation testing, where often a small
number of specific test packets is sent to the NIC, this causes a lot of
problems. One would always have to transmit at least BATCH_SIZE test
packets before anything can be received from eventdev. Additionally, if the rx
packet rate is slow this also adds a considerable amount of additional delay.
Looking at the rx adapter API and sw implementation code there doesn’t
seem to be a way to disable this internal caching. In my opinion this
“functionality" makes testing sw rx adapter so cumbersome that either the
implementation should be modified to enqueue the cached packets after a
while (some performance penalty) or there should be some method to
disable caching. Any opinions how this issue could be fixed?
At the minimum, I would think there should be a compile time option.
From a use case perspective, I think it falls under latency vs throughput considerations. If there is a latency sensitive application, it might not want to wait till 32 packets are received.
From what I understood from Matias Elo and also after a quick glance in the code, the unlucky packets will be buffered indefinitely, in case the system goes idle. This is totally unacceptable (both in production and validation), in my opinion, and should be filed as a bug.
Indeed, this is what happens. I’ll create a bug report to track this issue.
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
@ 2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
2019-05-09 11:24 ` Rao, Nikhil
2 siblings, 1 reply; 16+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-05-07 12:13 UTC (permalink / raw)
To: Elo, Matias (Nokia - FI/Espoo), Mattias Rönnblom
Cc: Honnappa Nagarahalli, dev, nd, Nikhil Rao
+ Nikhil
Please add the respective maintainer from the MAINTAINERS file for quick resolution.
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Elo, Matias (Nokia -
> FI/Espoo)
> Sent: Tuesday, May 7, 2019 5:33 PM
> To: Mattias Rönnblom <hofors@lysator.liu.se>
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; dev@dpdk.org;
> nd <nd@arm.com>
> Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
>
>
>
> On 7 May 2019, at 15:01, Mattias Rönnblom
> <hofors@lysator.liu.se> wrote:
>
>
>
> On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
>
> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter-
> event_enqueue_buffer', which stores packets received from the NIC until at
> least BATCH_SIZE (=32) packets have been received before enqueueing them to
> eventdev. For example in case of validation testing, where often a small number
> of specific test packets is sent to the NIC, this causes a lot of problems. One
> would always have to transmit at least BATCH_SIZE test packets before anything
> can be received from eventdev. Additionally, if the rx packet rate is slow this
> also adds a considerable amount of additional delay.
>
> Looking at the rx adapter API and sw implementation code there doesn’t seem
> to be a way to disable this internal caching. In my opinion this “functionality"
> makes testing sw rx adapter so cumbersome that either the implementation
> should be modified to enqueue the cached packets after a while (some
> performance penalty) or there should be some method to disable caching. Any
> opinions how this issue could be fixed?
> At the minimum, I would think there should be a compile time option.
> From a use case perspective, I think it falls under latency vs throughput
> considerations. If there is a latency sensitive application, it might not want to
> wait till 32 packets are received.
>
> From what I understood from Matias Elo and also after a quick glance in the
> code, the unlucky packets will be buffered indefinitely, in case the system goes
> idle. This is totally unacceptable (both in production and validation), in my
> opinion, and should be filed as a bug.
>
>
> Indeed, this is what happens. I’ll create a bug report to track this issue.
>
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
@ 2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
2 siblings, 2 replies; 16+ messages in thread
From: Rao, Nikhil @ 2019-05-09 11:24 UTC (permalink / raw)
To: Elo, Matias (Nokia - FI/Espoo), Mattias Rönnblom
Cc: Honnappa Nagarahalli, dev, nd
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Elo, Matias (Nokia -
> FI/Espoo)
> Sent: Tuesday, May 7, 2019 5:33 PM
> To: Mattias Rönnblom <hofors@lysator.liu.se>
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> dev@dpdk.org; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
>
>
>
> On 7 May 2019, at 15:01, Mattias Rönnblom
> <hofors@lysator.liu.se> wrote:
>
>
>
> On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
>
> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter-
> event_enqueue_buffer', which stores packets received from the NIC until at
> least BATCH_SIZE (=32) packets have been received before enqueueing them
> to eventdev. For example in case of validation testing, where often a small
> number of specific test packets is sent to the NIC, this causes a lot of
> problems. One would always have to transmit at least BATCH_SIZE test
> packets before anything can be received from eventdev. Additionally, if the
> rx
> packet rate is slow this also adds a considerable amount of additional delay.
>
> Looking at the rx adapter API and sw implementation code there doesn’t
> seem to be a way to disable this internal caching. In my opinion this
> “functionality" makes testing sw rx adapter so cumbersome that either the
> implementation should be modified to enqueue the cached packets after a
> while (some performance penalty) or there should be some method to
> disable caching. Any opinions how this issue could be fixed?
> At the minimum, I would think there should be a compile time option.
> From a use case perspective, I think it falls under latency vs throughput
> considerations. If there is a latency sensitive application, it might not want
> to wait till 32 packets are received.
>
> From what I understood from Matias Elo and also after a quick glance in the
> code, the unlucky packets will be buffered indefinitely, in case the system
> goes idle. This is totally unacceptable (both in production and validation), in
> my opinion, and should be filed as a bug.
>
>
> Indeed, this is what happens. I’ll create a bug report to track this issue.
>
I have posted a patch for this issue:
http://patchwork.dpdk.org/patch/53350/
Please let me know your comments.
Thanks,
Nikhil
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 11:24 ` Rao, Nikhil
@ 2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
1 sibling, 1 reply; 16+ messages in thread
From: Elo, Matias (Nokia - FI/Espoo) @ 2019-05-09 15:02 UTC (permalink / raw)
To: Rao, Nikhil; +Cc: Mattias Rönnblom, Honnappa Nagarahalli, dev, nd
Thanks, I’ve tested this patch and can confirm that it fixes the problem.
I didn’t do any performance comparison, but at least with a high packet rate rte_eth_rx_burst() should already return close to BATCH_SIZE packets, so the performance hit shouldn’t be that big.
-Matias
On 9 May 2019, at 14:24, Rao, Nikhil <nikhil.rao@intel.com> wrote:
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Elo, Matias (Nokia -
FI/Espoo)
Sent: Tuesday, May 7, 2019 5:33 PM
To: Mattias Rönnblom <hofors@lysator.liu.se>
Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
dev@dpdk.org; nd <nd@arm.com>
Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
On 7 May 2019, at 15:01, Mattias Rönnblom
<hofors@lysator.liu.se<mailto:hofors@lysator.liu.se><mailto:hofors@lysator.liu.se>> wrote:
On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
Hi,
The SW eventdev rx adapter has an internal enqueue buffer
'rx_adapter->event_enqueue_buffer', which stores packets received from the
NIC until at least BATCH_SIZE (=32) packets have been received before
enqueueing them to eventdev. For example in case of validation testing,
where often a small number of specific test packets is sent to the NIC,
this causes a lot of problems. One would always have to transmit at least
BATCH_SIZE test packets before anything can be received from eventdev.
Additionally, if the rx packet rate is slow this also adds a considerable
amount of additional delay.
Looking at the rx adapter API and sw implementation code there doesn’t
seem to be a way to disable this internal caching. In my opinion this
"functionality" makes testing the sw rx adapter so cumbersome that either
the implementation should be modified to enqueue the cached packets after
a while (some performance penalty) or there should be some method to
disable caching. Any opinions on how this issue could be fixed?
At the minimum, I would think there should be a compile-time option.
From a use case perspective, I think it falls under latency vs. throughput
considerations. A latency-sensitive application might not want
to wait until 32 packets are received.
From what I understood from Matias Elo and also after a quick glance in the
code, the unlucky packets will be buffered indefinitely, in case the system
goes idle. This is totally unacceptable (both in production and validation), in
my opinion, and should be filed as a bug.
Indeed, this is what happens. I’ll create a bug report to track this issue.
I have posted a patch for this issue
http://patchwork.dpdk.org/patch/53350/
Please let me know your comments.
Thanks,
Nikhil
^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2019-05-09 15:02 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-07 9:52 [dpdk-dev] eventdev: sw rx adapter enqueue caching Elo, Matias (Nokia - FI/Espoo)
2019-05-07 11:12 ` Honnappa Nagarahalli
2019-05-07 12:01 ` Mattias Rönnblom
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 11:56 ` Mattias Rönnblom