DPDK patches and discussions
From: "Mattias Rönnblom" <hofors@lysator.liu.se>
To: Venky Venkatesh <vvenkatesh@paloaltonetworks.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Application used for DSW event_dev performance testing
Date: Wed, 14 Nov 2018 20:41:02 +0100
Message-ID: <779258cb-490f-0111-94ce-bc87d1502ed0@lysator.liu.se>
In-Reply-To: <27A03E76-DED0-435F-B02F-24A7A7B1BCC9@contoso.com>

On 2018-11-14 20:16, Venky Venkatesh wrote:
> Hi,
> 
> https://mails.dpdk.org/archives/dev/2018-September/111344.html mentions
> that there is a sample application where “worker cores can sustain
> 300-400 million event/s. With a pipeline with 1000 clock cycles of work
> per stage, the average event device overhead is somewhere 50-150 clock
> cycles/event”. Is this sample application code available?
> 
It's proprietary code, although it's also been tested by some of our 
partners.

The primary reason it hasn't been contributed to DPDK is that doing so 
would be a fair amount of work. I would describe it as an eventdev 
pipeline simulator, rather than a sample app.

> We have written a similar simple sample application where 1 core keeps
> enqueuing (as NEW/ATOMIC) and n cores dequeue (and RELEASE) and do no
> other work. But we are not seeing anything close in terms of
> performance. We are also seeing some counter-intuitive behavior, such
> as a burst of 32 performing worse than a burst of 1. We surely have
> something wrong and would thus like to compare against a good
> application that you have written. Could you please share it?
> 

Is this enqueue or dequeue burst? How large is n? Is this explicit release?
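
To make the release question concrete, here is a minimal sketch of the 
kind of worker loop I have in mind, using explicit release. The dev/port 
ids and the process_event() helper are hypothetical placeholders, and 
retry handling for partially accepted enqueues is elided:

#include <rte_eventdev.h>

#define BURST_SIZE 32 /* compare e.g. 1 vs 32 when measuring */

/* Hypothetical application-level work function. */
static void process_event(struct rte_event *ev);

static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event evs[BURST_SIZE];

        for (;;) {
                uint16_t n = rte_event_dequeue_burst(dev_id, port_id, evs,
                                                     BURST_SIZE, 0);
                uint16_t i;

                for (i = 0; i < n; i++) {
                        process_event(&evs[i]);
                        /* Explicit release: hand back the (atomic) flow
                         * context, rather than relying on the implicit
                         * release performed by the next dequeue. */
                        evs[i].op = RTE_EVENT_OP_RELEASE;
                }

                if (n > 0)
                        rte_event_enqueue_burst(dev_id, port_id, evs, n);
        }
}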

What do you set nb_events_limit to? Good DSW performance depends heavily 
on the average burst size on the event rings, which in turn depends on 
the number of in-flight events. On systems with very high core counts, 
you might also want to increase DSW_MAX_PORT_OPS_PER_BG_TASK, since it 
effectively caps the number of events buffered in the output buffers.
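
For reference, nb_events_limit is set at device configuration time. A 
configuration sketch, where the queue/port counts are placeholders only:

#include <string.h>
#include <rte_eventdev.h>

static int
configure_evdev(uint8_t dev_id)
{
        struct rte_event_dev_info info;
        struct rte_event_dev_config config;

        rte_event_dev_info_get(dev_id, &info);

        memset(&config, 0, sizeof(config));
        config.nb_event_queues = 2;   /* placeholder */
        config.nb_event_ports = 16;   /* placeholder */
        /* A large in-flight budget lets the internal event rings see
         * larger average bursts, which DSW performance depends on. */
        config.nb_events_limit = info.max_num_events;
        config.nb_event_queue_flows = 1024;
        config.nb_event_port_dequeue_depth =
                info.max_event_port_dequeue_depth;
        config.nb_event_port_enqueue_depth =
                info.max_event_port_enqueue_depth;

        return rte_event_dev_configure(dev_id, &config);
}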

In the pipeline simulator, all cores produce events initially, and then 
recycle events once the number of in-flight events reaches a certain 
threshold (50% of nb_events_limit). A single lcore won't be able to fill 
the pipeline if you have zero-work stages.
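
In rough code terms, and only as my reconstruction rather than the 
actual simulator source, the injection side looks something like this 
(a single plain counter stands in for proper multi-producer accounting):

#include <rte_eventdev.h>

/* Events injected but not yet terminated; a plain counter suffices
 * in this single-producer sketch. */
static uint32_t inflight;

static void
inject_if_below_threshold(uint8_t dev_id, uint8_t port_id,
                          uint32_t nb_events_limit, uint32_t flow_id)
{
        struct rte_event ev = { 0 };

        /* Stop injecting NEW events at 50% of nb_events_limit; beyond
         * this point the pipeline recycles (forwards) existing events. */
        if (inflight >= nb_events_limit / 2)
                return;

        ev.op = RTE_EVENT_OP_NEW;
        ev.queue_id = 0;
        ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
        ev.flow_id = flow_id;

        if (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) == 1)
                inflight++;
}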

Even though I can't send you the simulator code at this point, I'm happy 
to assist you in any DSW-related endeavors.


Thread overview: 7+ messages
2018-11-14 19:16 Venky Venkatesh
2018-11-14 19:41 ` Mattias Rönnblom [this message]
2018-11-14 21:56   ` Venky Venkatesh
2018-11-15  5:46     ` Mattias Rönnblom
2018-11-27 22:33       ` Venky Venkatesh
2018-11-28 16:55         ` Mattias Rönnblom
2018-11-28 17:09           ` Mattias Rönnblom
