* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
@ 2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
2019-05-09 11:24 ` Rao, Nikhil
2 siblings, 0 replies; 16+ messages in thread
From: Elo, Matias (Nokia - FI/Espoo) @ 2019-05-07 12:03 UTC (permalink / raw)
To: Mattias Rönnblom; +Cc: Honnappa Nagarahalli, dev, nd
On 7 May 2019, at 15:01, Mattias Rönnblom <hofors@lysator.liu.se> wrote:
On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
Hi,
The SW eventdev rx adapter has an internal enqueue buffer
'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC
until at least BATCH_SIZE (=32) packets have been received before enqueueing
them to eventdev. For example, in validation testing, where often only a small
number of specific test packets is sent to the NIC, this causes a lot of
problems. One would always have to transmit at least BATCH_SIZE test packets
before anything can be received from eventdev. Additionally, if the rx packet
rate is slow, this buffering adds a considerable amount of delay.
Looking at the rx adapter API and the sw implementation code, there doesn't
seem to be a way to disable this internal caching. In my opinion, this
"functionality" makes testing the sw rx adapter so cumbersome that either the
implementation should be modified to enqueue the cached packets after a
while (at some performance penalty) or there should be some method to
disable the caching. Any opinions on how this issue could be fixed?
At the minimum, I would think there should be a compile time option.
From a use case perspective, I think it falls under latency vs. throughput considerations. A latency-sensitive application might not want to wait until 32 packets are received.
From what I understood from Matias Elo, and also after a quick glance at the code, the unlucky packets will be buffered indefinitely if the system goes idle. This is totally unacceptable (both in production and in validation), in my opinion, and should be filed as a bug.
Indeed, this is what happens. I’ll create a bug report to track this issue.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
@ 2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
2019-05-09 11:24 ` Rao, Nikhil
2 siblings, 1 reply; 16+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-05-07 12:13 UTC (permalink / raw)
To: Elo, Matias (Nokia - FI/Espoo), Mattias Rönnblom
Cc: Honnappa Nagarahalli, dev, nd, Nikhil Rao
+ Nikhil
Please add respective maintainer from MAINTAINERS file for quick resolution.
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Elo, Matias (Nokia -
> FI/Espoo)
> Sent: Tuesday, May 7, 2019 5:33 PM
> To: Mattias Rönnblom <hofors@lysator.liu.se>
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; dev@dpdk.org;
> nd <nd@arm.com>
> Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
>
>
>
> On 7 May 2019, at 15:01, Mattias Rönnblom
> <hofors@lysator.liu.se> wrote:
>
>
>
> On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
>
> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer
> 'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at
> least BATCH_SIZE (=32) packets have been received before enqueueing them to
> eventdev. For example in case of validation testing, where often a small number
> of specific test packets is sent to the NIC, this causes a lot of problems. One
> would always have to transmit at least BATCH_SIZE test packets before anything
> can be received from eventdev. Additionally, if the rx packet rate is slow this
> also adds a considerable amount of additional delay.
>
> Looking at the rx adapter API and sw implementation code there doesn’t seem
> to be a way to disable this internal caching. In my opinion this "functionality"
> makes testing sw rx adapter so cumbersome that either the implementation
> should be modified to enqueue the cached packets after a while (some
> performance penalty) or there should be some method to disable caching. Any
> opinions how this issue could be fixed?
> At the minimum, I would think there should be a compile time option.
> From a use case perspective, I think it falls under latency vs throughput
> considerations. If there is a latency sensitive application, it might not want to
> wait till 32 packets are received.
>
> From what I understood from Matias Elo and also after a quick glance in the
> code, the unlucky packets will be buffered indefinitely, in case the system goes
> idle. This is totally unacceptable (both in production and validation), in my
> opinion, and should be filed as a bug.
>
>
> Indeed, this is what happens. I’ll create a bug report to track this issue.
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:03 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:13 ` Jerin Jacob Kollanukkaran
@ 2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
2 siblings, 2 replies; 16+ messages in thread
From: Rao, Nikhil @ 2019-05-09 11:24 UTC (permalink / raw)
To: Elo, Matias (Nokia - FI/Espoo), Mattias Rönnblom
Cc: Honnappa Nagarahalli, dev, nd
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Elo, Matias (Nokia -
> FI/Espoo)
> Sent: Tuesday, May 7, 2019 5:33 PM
> To: Mattias Rönnblom <hofors@lysator.liu.se>
> Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> dev@dpdk.org; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
>
>
>
> On 7 May 2019, at 15:01, Mattias Rönnblom
> <hofors@lysator.liu.se> wrote:
>
>
>
> On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
>
> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer
> 'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at
> least BATCH_SIZE (=32) packets have been received before enqueueing them
> to eventdev. For example in case of validation testing, where often a small
> number of specific test packets is sent to the NIC, this causes a lot of
> problems. One would always have to transmit at least BATCH_SIZE test
> packets before anything can be received from eventdev. Additionally, if the rx
> packet rate is slow this also adds a considerable amount of additional delay.
>
> Looking at the rx adapter API and sw implementation code there doesn’t
> seem to be a way to disable this internal caching. In my opinion this
> "functionality" makes testing sw rx adapter so cumbersome that either the
> implementation should be modified to enqueue the cached packets after a
> while (some performance penalty) or there should be some method to
> disable caching. Any opinions how this issue could be fixed?
> At the minimum, I would think there should be a compile time option.
> From a use case perspective, I think it falls under latency vs throughput
> considerations. If there is a latency sensitive application, it might not want
> to wait till 32 packets are received.
>
> From what I understood from Matias Elo and also after a quick glance in the
> code, the unlucky packets will be buffered indefinitely, in case the system
> goes idle. This is totally unacceptable (both in production and validation), in
> my opinion, and should be filed as a bug.
>
>
> Indeed, this is what happens. I’ll create a bug report to track this issue.
>
I have posted a patch for this issue:
http://patchwork.dpdk.org/patch/53350/
Please let me know your comments.
Thanks,
Nikhil
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
2019-05-09 11:24 ` Rao, Nikhil
2019-05-09 11:24 ` Rao, Nikhil
@ 2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
2019-05-09 15:02 ` Elo, Matias (Nokia - FI/Espoo)
1 sibling, 1 reply; 16+ messages in thread
From: Elo, Matias (Nokia - FI/Espoo) @ 2019-05-09 15:02 UTC (permalink / raw)
To: Rao, Nikhil; +Cc: Mattias Rönnblom, Honnappa Nagarahalli, dev, nd
Thanks, I’ve tested this patch and can confirm that it fixes the problem.
I didn't do any performance comparison, but at least with a high packet rate rte_eth_rx_burst() should already return close to BATCH_SIZE packets, so the performance hit shouldn't be that big.
-Matias
On 9 May 2019, at 14:24, Rao, Nikhil <nikhil.rao@intel.com> wrote:
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Elo, Matias (Nokia -
FI/Espoo)
Sent: Tuesday, May 7, 2019 5:33 PM
To: Mattias Rönnblom <hofors@lysator.liu.se>
Cc: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
dev@dpdk.org; nd <nd@arm.com>
Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
On 7 May 2019, at 15:01, Mattias Rönnblom
<hofors@lysator.liu.se> wrote:
On 2019-05-07 13:12, Honnappa Nagarahalli wrote:
Hi,
The SW eventdev rx adapter has an internal enqueue buffer
'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at
least BATCH_SIZE (=32) packets have been received before enqueueing them
to eventdev. For example in case of validation testing, where often a small
number of specific test packets is sent to the NIC, this causes a lot of
problems. One would always have to transmit at least BATCH_SIZE test
packets before anything can be received from eventdev. Additionally, if the rx
packet rate is slow this also adds a considerable amount of additional delay.
Looking at the rx adapter API and sw implementation code there doesn’t
seem to be a way to disable this internal caching. In my opinion this
"functionality" makes testing sw rx adapter so cumbersome that either the
implementation should be modified to enqueue the cached packets after a
while (some performance penalty) or there should be some method to
disable caching. Any opinions how this issue could be fixed?
At the minimum, I would think there should be a compile time option.
From a use case perspective, I think it falls under latency vs throughput
considerations. If there is a latency sensitive application, it might not want
to wait till 32 packets are received.
From what I understood from Matias Elo and also after a quick glance in the
code, the unlucky packets will be buffered indefinitely, in case the system
goes idle. This is totally unacceptable (both in production and validation), in
my opinion, and should be filed as a bug.
Indeed, this is what happens. I’ll create a bug report to track this issue.
I have posted a patch for this issue
http://patchwork.dpdk.org/patch/53350/
Please let me know your comments.
Thanks,
Nikhil
^ permalink raw reply [flat|nested] 16+ messages in thread