DPDK patches and discussions
From: "Mattias Rönnblom" <hofors@lysator.liu.se>
To: "Elo, Matias (Nokia - FI/Espoo)" <matias.elo@nokia.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] eventdev: sw rx adapter enqueue caching
Date: Tue, 7 May 2019 13:56:35 +0200
Message-ID: <800ce1b4-a723-603a-3e5a-0837997e50e4@lysator.liu.se>
In-Reply-To: <53AC5150-DBE2-4E46-9D93-99E01DCEC725@nokia.com>

On 2019-05-07 11:52, Elo, Matias (Nokia - FI/Espoo) wrote:
> Hi,
> 
> The SW eventdev rx adapter has an internal enqueue buffer 'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC until at least BATCH_SIZE (=32) packets have been received, before enqueueing them to eventdev. For example, in validation testing, where often only a small number of specific test packets is sent to the NIC, this causes a lot of problems: one always has to transmit at least BATCH_SIZE test packets before anything can be received from eventdev. Additionally, if the rx packet rate is low, this adds a considerable amount of delay.
> 
> Looking at the rx adapter API and the sw implementation code, there doesn't seem to be a way to disable this internal caching. In my opinion this "functionality" makes testing the sw rx adapter so cumbersome that either the implementation should be modified to enqueue the cached packets after a while (at some performance cost), or there should be a method to disable the caching. Any opinions on how this issue could be fixed?
> 

The rx adapter's service function will be called repeatedly, at a very 
high frequency (especially in near-idle situations). One potential 
scheme is to keep track, by means of a counter, of the number of calls 
since the last packet was received from the NIC, and to flush the 
buffer after a certain number of idle (zero-NIC-dequeue) calls.

In that case, you maintain good performance, while not introducing too 
much latency.

The DSW Event Device takes this approach to flushing its internal buffers.
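
Below is a minimal sketch of what such an idle-call counter could look 
like in the adapter's service function. FLUSH_THRESHOLD, struct 
rx_adapter, poll_eth_devs() and flush_event_buffer() are invented 
names for illustration, not actual rx adapter code:

#include <stdint.h>

#define FLUSH_THRESHOLD 1024	/* idle calls before a forced flush */

struct rx_adapter;		/* stand-in for the real adapter state */

/* stand-ins for the real NIC polling and buffer flushing logic */
uint16_t poll_eth_devs(struct rx_adapter *rx_adapter);
void flush_event_buffer(struct rx_adapter *rx_adapter);

static int
rx_adapter_service_func(void *args)
{
	struct rx_adapter *rx_adapter = args;
	static uint64_t idle_calls;

	/* dequeued packets end up in the internal enqueue buffer */
	uint16_t nb_rx = poll_eth_devs(rx_adapter);

	if (nb_rx > 0) {
		idle_calls = 0;
	} else if (++idle_calls == FLUSH_THRESHOLD) {
		/* enqueue whatever is buffered, even if < BATCH_SIZE */
		flush_event_buffer(rx_adapter);
		idle_calls = 0;
	}

	return 0;
}

The counter costs nearly nothing on the fast path, and the worst-case 
added latency is bounded by FLUSH_THRESHOLD service calls.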

Another way would be to use a timer: either an adapter-internal TSC 
timestamp tracking buffer age, or an rte_timer timer. rdtsc is not 
free, so I would lean toward the first option.
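
For reference, the TSC timestamp variant could be a fragment along 
these lines inside the service function (buffered_events() and 
flush_event_buffer() are again invented names; rte_rdtsc() and 
rte_get_tsc_hz() are the DPDK cycle-counter APIs from rte_cycles.h):

	static uint64_t flush_deadline;
	uint64_t max_age = rte_get_tsc_hz() / 10000; /* ~100 us of TSC ticks */

	if (nb_rx > 0)
		flush_deadline = rte_rdtsc() + max_age;	/* rearm on traffic */
	else if (buffered_events(rx_adapter) > 0 &&
		 rte_rdtsc() >= flush_deadline)
		flush_event_buffer(rx_adapter);

Note that this reads the TSC on every service call, whereas an 
rte_timer would also require periodic rte_timer_manage() calls.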


Thread overview: 8 messages
2019-05-07  9:52 Elo, Matias (Nokia - FI/Espoo)
2019-05-07 11:12 ` Honnappa Nagarahalli
2019-05-07 12:01   ` Mattias Rönnblom
2019-05-07 12:03     ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 12:13       ` Jerin Jacob Kollanukkaran
2019-05-09 11:24       ` Rao, Nikhil
2019-05-09 15:02         ` Elo, Matias (Nokia - FI/Espoo)
2019-05-07 11:56 ` Mattias Rönnblom [this message]
